Euclidean geometry is a mathematical system attributed to the Alexandrian Greek mathematician Euclid, which he described in his textbook on geometry: the Elements. Euclid's method consists in assuming a small set of intuitively appealing axioms, and deducing many other propositions (theorems) from these. Although many of Euclid's results had been stated by earlier mathematicians, Euclid was the first to show how these propositions could fit into a comprehensive deductive and logical system. The Elements begins with plane geometry, still taught in secondary school as the first axiomatic system and the first examples of formal proof. It goes on to the solid geometry of three dimensions. Much of the Elements states results of what are now called algebra and number theory, couched in geometrical language.
For over two thousand years, the adjective "Euclidean" was unnecessary because no other sort of geometry had been conceived. Euclid's axioms seemed so intuitively obvious (with the possible exception of the parallel postulate) that any theorem proved from them was deemed true in an absolute, often metaphysical, sense. Today, however, many other self-consistent non-Euclidean geometries are known, the first ones having been discovered in the early 19th century. An implication of Einstein's theory of general relativity is that physical space itself is not Euclidean, and Euclidean space is a good approximation for it only where the gravitational field is weak.
The Elements
The Elements is mainly a systematization of earlier knowledge of geometry. Its superiority over earlier treatments was rapidly recognized, with the result that there was little interest in preserving the earlier ones, and they are now nearly all lost.
Books I–IV and VI discuss plane geometry. Many results about plane figures are proved, e.g., If a triangle has two equal angles, then the sides subtended by the angles are equal. The Pythagorean theorem is proved.
Books V and VII–X deal with number theory, with numbers treated geometrically via their representation as line segments with various lengths. Notions such as prime numbers and rational and irrational numbers are introduced. The infinitude of prime numbers is proved.
Books XI–XIII concern solid geometry. A typical result is the 1:3 ratio between the volume of a cone and a cylinder with the same height and base.
Euclidean geometry is an axiomatic system, in which all theorems ("true statements") are derived from a small number of axioms. Near the beginning of the first book of the Elements, Euclid gives five postulates (axioms) for plane geometry, stated in terms of constructions (as translated by Thomas Heath):
"Let the following be postulated":
- "To draw a straight line from any point to any point."
- "To produce [extend] a finite straight line continuously in a straight line."
- "To describe a circle with any centre and distance [radius]."
- "That all right angles are equal to one another."
- The parallel postulate: "That, if a straight line falling on two straight lines make the interior angles on the same side less than two right angles, the two straight lines, if produced indefinitely, meet on that side on which are the angles less than the two right angles."
Although Euclid's statement of the postulates only explicitly asserts the existence of the constructions, they are also taken to be unique.
The Elements also include the following five "common notions":
- Things that are equal to the same thing are also equal to one another (Transitive property of equality).
- If equals are added to equals, then the wholes are equal.
- If equals are subtracted from equals, then the remainders are equal.
- Things that coincide with one another equal one another (Reflexive Property).
- The whole is greater than the part.
Parallel postulate
To the ancients, the parallel postulate seemed less obvious than the others. They were concerned with creating a system that was absolutely rigorous, and to them it seemed that the parallel postulate should be provable rather than simply accepted as a fact. It is now known that such a proof is impossible. Euclid himself seems to have considered it as being qualitatively different from the others, as evidenced by the organization of the Elements: the first 28 propositions he presents are those that can be proved without it.
Many alternative axioms can be formulated that are logically equivalent to the parallel postulate. For example, Playfair's axiom states:
- In a plane, through a point not on a given straight line, at most one line can be drawn that never meets the given line.
Methods of proof
Euclidean Geometry is constructive. Postulates 1, 2, 3, and 5 assert the existence and uniqueness of certain geometric figures, and these assertions are of a constructive nature: that is, we are not only told that certain things exist, but are also given methods for creating them with no more than a compass and an unmarked straightedge. In this sense, Euclidean geometry is more concrete than many modern axiomatic systems such as set theory, which often assert the existence of objects without saying how to construct them, or even assert the existence of objects that cannot be constructed within the theory. Strictly speaking, the lines on paper are models of the objects defined within the formal system, rather than instances of those objects. For example a Euclidean straight line has no width, but any real drawn line will. Though nearly all modern mathematicians consider nonconstructive methods just as sound as constructive ones, Euclid's constructive proofs often supplanted fallacious nonconstructive ones—e.g., some of the Pythagoreans' proofs that involved irrational numbers, which usually required a statement such as "Find the greatest common measure of ..."
Euclid often used proof by contradiction. Euclidean geometry also allows the method of superposition, in which a figure is transferred to another point in space. For example, proposition I.4, side-angle-side congruence of triangles, is proved by moving one of the two triangles so that one of its sides coincides with the other triangle's equal side, and then proving that the other sides coincide as well. Some modern treatments add a sixth postulate, the rigidity of the triangle, which can be used as an alternative to superposition.
System of measurement and arithmetic
Euclidean geometry has two fundamental types of measurements: angle and distance. The angle scale is absolute, and Euclid uses the right angle as his basic unit, so that, e.g., a 45-degree angle would be referred to as half of a right angle. The distance scale is relative; one arbitrarily picks a line segment with a certain length as the unit, and other distances are expressed in relation to it.
A line in Euclidean geometry is a model of the real number line. A line segment is a part of a line that is bounded by two end points, and contains every point on the line between its end points. Addition is represented by a construction in which one line segment is copied onto the end of another line segment to extend its length, and similarly for subtraction.
Measurements of area and volume are derived from distances. For example, a rectangle with a width of 3 and a length of 4 has an area that represents the product, 12. Because this geometrical interpretation of multiplication was limited to three dimensions, there was no direct way of interpreting the product of four or more numbers, and Euclid avoided such products, although they are implied, e.g., in the proof of book IX, proposition 20.
Euclid refers to a pair of lines, or a pair of planar or solid figures, as "equal" (ἴσος) if their lengths, areas, or volumes are equal, and similarly for angles. The stronger term "congruent" refers to the idea that an entire figure is the same size and shape as another figure. Alternatively, two figures are congruent if one can be moved on top of the other so that it matches up with it exactly. (Flipping it over is allowed.) Thus, for example, a 2x6 rectangle and a 3x4 rectangle are equal but not congruent, and the letter R is congruent to its mirror image. Figures that would be congruent except for their differing sizes are referred to as similar. Corresponding angles in a pair of similar shapes are congruent and corresponding sides are in proportion to each other.
Notation and terminology
Naming of points and figures
Points are customarily named using capital letters of the alphabet. Other figures, such as lines, triangles, or circles, are named by listing a sufficient number of points to pick them out unambiguously from the relevant figure, e.g., triangle ABC would typically be a triangle with vertices at points A, B, and C.
Complementary and supplementary angles
Angles whose sum is a right angle are called complementary. Complementary angles are formed when one or more rays share a vertex with the two rays that form the right angle and point in a direction between them; infinitely many such rays are possible. Angles whose sum is a straight angle (180 degrees) are supplementary. Supplementary angles are formed in the same way, by one or more rays that share a vertex with the two rays forming the straight angle and point in a direction between them; again, infinitely many such rays are possible.
Modern versions of Euclid's notation
Modern school textbooks often define separate figures called lines (infinite), rays (semi-infinite), and line segments (of finite length). Euclid, rather than discussing a ray as an object that extends to infinity in one direction, would normally use locutions such as "if the line is extended to a sufficient length," although he occasionally referred to "infinite lines." A "line" in Euclid could be either straight or curved, and he used the more specific term "straight line" when necessary.
Some important or well known results
Pythagorean theorem: The sum of the areas of the two squares on the legs (a and b) of a right triangle equals the area of the square on the hypotenuse (c).
Bridge of Asses
The Bridge of Asses (Pons Asinorum) states that in isosceles triangles the angles at the base equal one another, and, if the equal straight lines are produced further, then the angles under the base equal one another. Its name may be attributed to its frequent role as the first real test in the Elements of the intelligence of the reader and as a bridge to the harder propositions that followed. It might also be so named because of the geometrical figure's resemblance to a steep bridge that only a sure-footed donkey could cross.
Congruence of triangles
Triangles are congruent if they have all three sides equal (SSS), two sides and the angle between them equal (SAS), or two angles and a side equal (ASA) (Book I, propositions 4, 8, and 26). (Triangles with three equal angles (AAA) are similar, but not necessarily congruent. Also, triangles with two equal sides and an adjacent angle are not necessarily equal or congruent.)
Sum of the angles of a triangle acute, obtuse, and right angle limits
The sum of the angles of a triangle is equal to a straight angle (180 degrees). Consequently, an equilateral triangle has three interior angles of 60 degrees, and every triangle has at least two acute angles and at most one obtuse or right angle.
Pythagorean theorem
The celebrated Pythagorean theorem (book I, proposition 47) states that in any right triangle, the area of the square whose side is the hypotenuse (the side opposite the right angle) is equal to the sum of the areas of the squares whose sides are the two legs (the two sides that meet at a right angle).
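In symbols, writing a and b for the lengths of the legs and c for the length of the hypotenuse, the statement is:

$$a^2 + b^2 = c^2$$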
Thales' theorem
Thales' theorem, named after Thales of Miletus, states that if A, B, and C are points on a circle where the line AC is a diameter of the circle, then the angle ABC is a right angle. Cantor supposed that Thales proved his theorem by means of Euclid book I, prop 32 after the manner of Euclid book III, prop 31. Tradition has it that Thales sacrificed an ox to celebrate this theorem.
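A quick numeric check of the theorem, added here purely as illustration: place the diameter's endpoints at (-1, 0) and (1, 0) and pick any point B on the unit circle; the vectors from B to the two endpoints are then perpendicular. A minimal Python sketch:

```python
# Numeric check of Thales' theorem: for any point B on a circle with diameter AC,
# the vectors BA and BC are perpendicular (their dot product is zero).
import math

theta = 1.234                              # arbitrary position of B on the unit circle
A, C = (-1.0, 0.0), (1.0, 0.0)             # endpoints of a diameter
B = (math.cos(theta), math.sin(theta))

BA = (A[0] - B[0], A[1] - B[1])
BC = (C[0] - B[0], C[1] - B[1])
dot = BA[0] * BC[0] + BA[1] * BC[1]
print(abs(dot) < 1e-12)                    # True: angle ABC is a right angle
```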
Scaling of area and volume
In modern terminology, the area of a plane figure is proportional to the square of any of its linear dimensions, $A \propto L^2$, and the volume of a solid to the cube, $V \propto L^3$. Euclid proved these results in various special cases such as the area of a circle and the volume of a parallelepipedal solid. Euclid determined some, but not all, of the relevant constants of proportionality. E.g., it was his successor Archimedes who proved that a sphere has 2/3 the volume of the circumscribing cylinder.
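Archimedes' sphere-cylinder result can be checked directly from the modern volume formulas; the short sketch below, added only as an illustration, also shows that the 2/3 ratio is independent of the radius, since both volumes scale as the cube of the linear dimension.

```python
# Check: a sphere of radius r has volume (4/3)*pi*r^3; its circumscribing cylinder
# (radius r, height 2r) has volume 2*pi*r^3, so the ratio is exactly 2/3 for every r.
import math

for r in (1.0, 2.5, 10.0):
    v_sphere = (4.0 / 3.0) * math.pi * r ** 3
    v_cylinder = math.pi * r ** 2 * (2 * r)
    print(r, v_sphere / v_cylinder)        # prints 0.666... for each radius
```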
Because of Euclidean geometry's fundamental status in mathematics, it would be impossible to give more than a representative sampling of applications here.
As suggested by the etymology of the word, one of the earliest reasons for interest in geometry was surveying, and certain practical results from Euclidean geometry, such as the right-angle property of the 3-4-5 triangle, were used long before they were proved formally. The fundamental types of measurements in Euclidean geometry are distances and angles, and both of these quantities can be measured directly by a surveyor. Historically, distances were often measured by chains such as Gunter's chain, and angles using graduated circles and, later, the theodolite.
An application of Euclidean solid geometry is the determination of packing arrangements, such as the problem of finding the most efficient packing of spheres in n dimensions. This problem has applications in error detection and correction.
Geometric optics uses Euclidean geometry to analyze the focusing of light by lenses and mirrors.
Geometry is used extensively in architecture.
As a description of the structure of space
Euclid believed that his axioms were self-evident statements about physical reality. Euclid's proofs depend upon assumptions perhaps not obvious in Euclid's fundamental axioms, in particular that certain movements of figures do not change their geometrical properties such as the lengths of sides and interior angles, the so-called Euclidean motions, which include translations and rotations of figures. Taken as a physical description of space, postulate 2 (extending a line) asserts that space does not have holes or boundaries (in other words, space is homogeneous and unbounded); postulate 4 (equality of right angles) says that space is isotropic and figures may be moved to any location while maintaining congruence; and postulate 5 (the parallel postulate) that space is flat (has no intrinsic curvature).
As discussed in more detail below, Einstein's theory of relativity significantly modifies this view.
The ambiguous character of the axioms as originally formulated by Euclid makes it possible for different commentators to disagree about some of their other implications for the structure of space, such as whether or not it is infinite (see below) and what its topology is. Modern, more rigorous reformulations of the system typically aim for a cleaner separation of these issues. Interpreting Euclid's axioms in the spirit of this more modern approach, axioms 1-4 are consistent with either infinite or finite space (as in elliptic geometry), and all five axioms are consistent with a variety of topologies (e.g., a plane, a cylinder, or a torus for two-dimensional Euclidean geometry).
Later work
Archimedes and Apollonius
Archimedes (ca. 287 BCE – ca. 212 BCE), a colorful figure about whom many historical anecdotes are recorded, is remembered along with Euclid as one of the greatest of ancient mathematicians. Although the foundations of his work were put in place by Euclid, his work, unlike Euclid's, is believed to have been entirely original. He proved equations for the volumes and areas of various figures in two and three dimensions, and enunciated the Archimedean property of finite numbers.
Apollonius of Perga (ca. 262 BCE–ca. 190 BCE) is mainly known for his investigation of conic sections.
17th century: Descartes
René Descartes (1596–1650) developed analytic geometry, an alternative method for formalizing geometry which focused on turning geometry into algebra. In this approach, a point is represented by its Cartesian (x, y) coordinates, a line is represented by its equation, and so on. In Euclid's original approach, the Pythagorean theorem follows from Euclid's axioms. In the Cartesian approach, the axioms are the axioms of algebra, and the equation expressing the Pythagorean theorem is then a definition of one of the terms in Euclid's axioms, which are now considered theorems. The equation $|PQ| = \sqrt{(p_x - q_x)^2 + (p_y - q_y)^2}$, defining the distance between two points P = (p_x, p_y) and Q = (q_x, q_y), is then known as the Euclidean metric, and other metrics define non-Euclidean geometries.
In terms of analytic geometry, the restriction of classical geometry to compass and straightedge constructions means a restriction to first- and second-order equations, e.g., y = 2x + 1 (a line), or x^2 + y^2 = 7 (a circle).
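As an added illustration (not from the original text), intersecting the example line with the example circle shows why only second-order equations arise: substituting y = 2x + 1 into x^2 + y^2 = 7 gives the quadratic 5x^2 + 4x - 6 = 0, whose two roots are the intersection points.

```python
# Intersect the line y = 2x + 1 with the circle x^2 + y^2 = 7.
# Substituting gives 5x^2 + 4x - 6 = 0, a second-order (quadratic) equation.
import math

a, b, c = 5.0, 4.0, -6.0
disc = b ** 2 - 4 * a * c
roots = [(-b + s * math.sqrt(disc)) / (2 * a) for s in (+1, -1)]
for x in roots:
    y = 2 * x + 1
    print((round(x, 4), round(y, 4)), round(x ** 2 + y ** 2, 10))  # x^2 + y^2 comes out as 7.0
```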
Also in the 17th century, Girard Desargues, motivated by the theory of perspective, introduced the concept of idealized points, lines, and planes at infinity. The result can be considered as a type of generalized geometry, projective geometry, but it can also be used to produce proofs in ordinary Euclidean geometry in which the number of special cases is reduced.
18th century
Geometers of the 18th century struggled to define the boundaries of the Euclidean system. Many tried in vain to prove the fifth postulate from the first four. By 1763 at least 28 different proofs had been published, but all were found incorrect.
Leading up to this period, geometers also tried to determine what constructions could be accomplished in Euclidean geometry. For example, the problem of trisecting an angle with a compass and straightedge is one that naturally occurs within the theory, since the axioms refer to constructive operations that can be carried out with those tools. However, centuries of efforts failed to find a solution to this problem, until Pierre Wantzel published a proof in 1837 that such a construction was impossible. Other constructions that were proved impossible include doubling the cube and squaring the circle. In the case of doubling the cube, the impossibility of the construction originates from the fact that the compass and straightedge method involves first- and second-order equations, while doubling a cube requires the solution of a third-order equation.
Euler discussed a generalization of Euclidean geometry called affine geometry, which retains the fifth postulate unmodified while weakening postulates three and four in a way that eliminates the notions of angle (whence right triangles become meaningless) and of equality of length of line segments in general (whence circles become meaningless) while retaining the notions of parallelism as an equivalence relation between lines, and equality of length of parallel line segments (so line segments continue to have a midpoint).
19th century and non-Euclidean geometry
The century's most significant development in geometry occurred when, around 1830, János Bolyai and Nikolai Ivanovich Lobachevsky separately published work on non-Euclidean geometry, in which the parallel postulate is not valid. Since non-Euclidean geometry is provably relatively consistent with Euclidean geometry, the parallel postulate cannot be proved from the other postulates.
In the 19th century, it was also realized that Euclid's ten axioms and common notions do not suffice to prove all of the theorems stated in the Elements. For example, Euclid assumed implicitly that any line contains at least two points, but this assumption cannot be proved from the other axioms, and therefore must be an axiom itself. The very first geometric proof in the Elements is that any line segment is part of a triangle; Euclid constructs this in the usual way, by drawing circles around both endpoints and taking their intersection as the third vertex. His axioms, however, do not guarantee that the circles actually intersect, because they do not assert the geometrical property of continuity, which in Cartesian terms is equivalent to the completeness property of the real numbers. Starting with Moritz Pasch in 1882, many improved axiomatic systems for geometry have been proposed, the best known being those of Hilbert, George Birkhoff, and Tarski.
20th century and general relativity
Einstein's theory of general relativity shows that the true geometry of spacetime is not Euclidean geometry. For example, if a triangle is constructed out of three rays of light, then in general the interior angles do not add up to 180 degrees due to gravity. A relatively weak gravitational field, such as the Earth's or the sun's, is represented by a metric that is approximately, but not exactly, Euclidean. Until the 20th century, there was no technology capable of detecting the deviations from Euclidean geometry, but Einstein predicted that such deviations would exist. They were later verified by observations such as the slight bending of starlight by the Sun during a solar eclipse in 1919, and such considerations are now an integral part of the software that runs the GPS system. It is possible to object to this interpretation of general relativity on the grounds that light rays might be improper physical models of Euclid's lines, or that relativity could be rephrased so as to avoid the geometrical interpretations. However, one of the consequences of Einstein's theory is that there is no possible physical test that can distinguish between a beam of light as a model of a geometrical line and any other physical model. Thus, the only logical possibilities are to accept non-Euclidean geometry as physically real, or to reject the entire notion of physical tests of the axioms of geometry, which can then be imagined as a formal system without any intrinsic real-world meaning.
Treatment of infinity
Infinite objects
Euclid sometimes distinguished explicitly between "finite lines" (e.g., Postulate 2) and "infinite lines" (book I, proposition 12). However, he typically did not make such distinctions unless they were necessary. The postulates do not explicitly refer to infinite lines, although for example some commentators interpret postulate 3, existence of a circle with any radius, as implying that space is infinite.
The notion of infinitesimally small quantities had previously been discussed extensively by the Eleatic School, but nobody had been able to put them on a firm logical basis, with paradoxes such as Zeno's paradox occurring that had not been resolved to universal satisfaction. Euclid used the method of exhaustion rather than infinitesimals.
Later ancient commentators such as Proclus (410–485 CE) treated many questions about infinity as issues demanding proof and, e.g., Proclus claimed to prove the infinite divisibility of a line, based on a proof by contradiction in which he considered the cases of even and odd numbers of points constituting it.
At the turn of the 20th century, Otto Stolz, Paul du Bois-Reymond, Giuseppe Veronese, and others produced controversial work on non-Archimedean models of Euclidean geometry, in which the distance between two points may be infinite or infinitesimal, in the Newton–Leibniz sense. Fifty years later, Abraham Robinson provided a rigorous logical foundation for Veronese's work.
Infinite processes
One reason that the ancients treated the parallel postulate as less certain than the others is that verifying it physically would require us to inspect two lines to check that they never intersected, even at some very distant point, and this inspection could potentially take an infinite amount of time.
The modern formulation of proof by induction was not developed until the 17th century, but some later commentators consider it implicit in some of Euclid's proofs, e.g., the proof of the infinitude of primes.
Supposed paradoxes involving infinite series, such as Zeno's paradox, predated Euclid. Euclid avoided such discussions, giving, for example, the expression for the partial sums of the geometric series in IX.35 without commenting on the possibility of letting the number of terms become infinite.
Logical basis
Classical logic
Euclid frequently used the method of proof by contradiction, and therefore the traditional presentation of Euclidean geometry assumes classical logic, in which every proposition is either true or false, i.e., for any proposition P, the proposition "P or not P" is automatically true.
Modern standards of rigor
Placing Euclidean geometry on a solid axiomatic basis was a preoccupation of mathematicians for centuries. The role of primitive notions, or undefined concepts, was clearly put forward by Alessandro Padoa of the Peano delegation at the 1900 Paris conference:
...when we begin to formulate the theory, we can imagine that the undefined symbols are completely devoid of meaning and that the unproved propositions are simply conditions imposed upon the undefined symbols.
Then, the system of ideas that we have initially chosen is simply one interpretation of the undefined symbols; but... this interpretation can be ignored by the reader, who is free to replace it in his mind by another interpretation... that satisfies the conditions...
Logical questions thus become completely independent of empirical or psychological questions... The system of undefined symbols can then be regarded as the abstraction obtained from the specialized theories that result when... the system of undefined symbols is successively replaced by each of the interpretations...—Padoa, Essai d'une théorie algébrique des nombres entiers, avec une introduction logique à une théorie déductive quelconque
That is, mathematics is context-independent knowledge within a hierarchical framework. As said by Bertrand Russell:
If our hypothesis is about anything, and not about some one or more particular things, then our deductions constitute mathematics. Thus, mathematics may be defined as the subject in which we never know what we are talking about, nor whether what we are saying is true.—Bertrand Russell, Mathematics and the metaphysicians
Axiomatic formulations
Geometry is the science of correct reasoning on incorrect figures.—George Pólya, How to Solve It, p. 208
- Euclid's axioms: In his dissertation to Trinity College, Cambridge, Bertrand Russell summarized the changing role of Euclid's geometry in the minds of philosophers up to that time. It was a conflict between certain knowledge, independent of experiment, and empiricism, requiring experimental input. This issue became clear as it was discovered that the parallel postulate was not necessarily valid and its applicability was an empirical matter, deciding whether the applicable geometry was Euclidean or non-Euclidean.
- Hilbert's axioms: Hilbert's axioms had the goal of identifying a simple and complete set of independent axioms from which the most important geometric theorems could be deduced. The outstanding objectives were to make Euclidean geometry rigorous (avoiding hidden assumptions) and to make clear the ramifications of the parallel postulate.
- Birkhoff's axioms: Birkhoff proposed four postulates for Euclidean geometry that can be confirmed experimentally with scale and protractor. The notions of angle and distance become primitive concepts.
- Tarski's axioms: Alfred Tarski (1902–1983) and his students defined elementary Euclidean geometry as the geometry that can be expressed in first-order logic and does not depend on set theory for its logical basis, in contrast to Hilbert's axioms, which involve point sets. Tarski proved that his axiomatic formulation of elementary Euclidean geometry is consistent and complete in a certain sense: there is an algorithm by which every proposition can be shown to be either true or false. (This doesn't violate Gödel's theorem, because Euclidean geometry cannot describe a sufficient amount of arithmetic for the theorem to apply.) This is equivalent to the decidability of real closed fields, of which elementary Euclidean geometry is a model.
Constructive approaches and pedagogy
The process of abstract axiomatization as exemplified by Hilbert's axioms reduces geometry to theorem proving or predicate logic. In contrast, the Greeks used construction postulates, and emphasized problem solving. For the Greeks, constructions are more primitive than existence propositions, and can be used to prove existence propositions, but not vice versa. To describe problem solving adequately requires a richer system of logical concepts. The contrast in approach may be summarized:
- Axiomatic proof: Proofs are deductive derivations of propositions from primitive premises that are ‘true’ in some sense. The aim is to justify the proposition.
- Analytic proof: Proofs are non-deductive derivations of hypotheses from problems. The aim is to find hypotheses capable of giving a solution to the problem. One can argue that Euclid's axioms were arrived upon in this manner. In particular, it is thought that Euclid felt the parallel postulate was forced upon him, as indicated by his reluctance to make use of it, and his arrival upon it by the method of contradiction.
Andrei Nikolaevich Kolmogorov proposed a problem-solving basis for geometry. This work was a precursor of a modern formulation in terms of constructive type theory. This development has implications for pedagogy as well.
If proof simply follows conviction of truth rather than contributing to its construction and is only experienced as a demonstration of something already known to be true, it is likely to remain meaningless and purposeless in the eyes of students.—Celia Hoyles, The curricular shaping of students' approach to proof
See also
- Analytic geometry
- Type theory
- Interactive geometry software
- Non-Euclidean geometry
- Ordered geometry
- Incidence geometry
- Metric geometry
- Birkhoff's axioms
- Hilbert's axioms
- Parallel postulate
- Schopenhauer's criticism of the proofs of the Parallel Postulate
- Cartesian coordinate system
Classical theorems
- Ceva's theorem
- Heron's formula
- Nine-point circle
- Pythagorean theorem
- Menelaus' theorem
- Angle bisector theorem
- Butterfly theorem
- Eves, vol. 1., p. 19
- Eves (1963), vol. 1, p. 10
- Eves, p. 19
- Misner, Thorne, and Wheeler (1973), p. 47
- Euclid, book I, proposition 47
- The assumptions of Euclid are discussed from a modern perspective in Harold E. Wolfe (2007). Introduction to Non-Euclidean Geometry. Mill Press. p. 9. ISBN 1-4067-1852-1.
- tr. Heath, pp. 195–202.
- Ball, p. 56
- Within Euclid's assumptions, it is quite easy to give a formula for area of triangles and squares. However, in a more general context like set theory, it is not as easy to prove that the area of a square is the sum of areas of its pieces, for example. See Lebesgue measure and Banach–Tarski paradox.
- Daniel Shanks (2002). Solved and Unsolved Problems in Number Theory. American Mathematical Society.
- Coxeter, p. 5
- Euclid, book I, proposition 5, tr. Heath, p. 251
- Ignoring the alleged difficulty of Book I, Proposition 5, Sir Thomas L. Heath mentions another interpretation. This rests on the resemblance of the figure's lower straight lines to a steeply-inclined bridge that could be crossed by an ass but not by a horse: "But there is another view (as I have learnt lately) which is more complimentary to the ass. It is that, the figure of the proposition being like that of a trestle bridge, with a ramp at each end which is more practicable the flatter the figure is drawn, the bridge is such that, while a horse could not surmount the ramp, an ass could; in other words, the term is meant to refer to the surefootedness of the ass rather than to any want of intelligence on his part." (in "Excursis II," volume 1 of Heath's translation of The Thirteen Books of the Elements.)
- Euclid, book I, proposition 32
- Heath, p. 135, Extract of page 135
- Heath, p. 318
- Euclid, book XII, proposition 2
- Euclid, book XI, proposition 33
- Ball, p. 66
- Ball, p. 5
- Eves, vol. 1, p. 5; Mlodinow, p. 7
- Tom Hull. "Origami and Geometric Constructions".
- Richard J. Trudeau (2008). "Euclid's axioms". The Non-Euclidean Revolution. Birkhäuser. pp. 39 'ff. ISBN 0-8176-4782-1.
- See, for example: Luciano da Fontoura Costa, Roberto Marcondes Cesar (2001). Shape analysis and classification: theory and practice. CRC Press. p. 314. ISBN 0-8493-3493-4. and Helmut Pottmann, Johannes Wallner (2010). Computational Line Geometry. Springer. p. 60. ISBN 3-642-04017-9. The group of motions underlie the metric notions of geometry. See Felix Klein (2004). Elementary Mathematics from an Advanced Standpoint: Geometry (Reprint of 1939 Macmillan Company ed.). Courier Dover. p. 167. ISBN 0-486-43481-8.
- Roger Penrose (2007). The Road to Reality: A Complete Guide to the Laws of the Universe. Vintage Books. p. 29. ISBN 0-679-77631-1.
- Heath, p. 200
- e.g., Tarski (1951)
- Eves, p. 27
- Ball, pp. 268ff
- Eves (1963)
- Hofstadter 1979, p. 91.
- Eves (1963), p. 64
- Ball, p. 485
- * Howard Eves, 1997 (1958). Foundations and Fundamental Concepts of Mathematics. Dover.
- Birkhoff, G. D., 1932, "A Set of Postulates for Plane Geometry (Based on Scale and Protractors)," Annals of Mathematics 33.
- Tarski (1951)
- Misner, Thorne, and Wheeler (1973), p. 191
- Rizos, Chris. University of New South Wales. GPS Satellite Signals. 1999.
- Ball, p. 31
- Heath, p. 268
- Giuseppe Veronese, On Non-Archimedean Geometry, 1908. English translation in Real Numbers, Generalizations of the Reals, and Theories of Continua, ed. Philip Ehrlich, Kluwer, 1994.
- Robinson, Abraham (1966). Non-standard analysis.
- For the assertion that this was the historical reason for the ancients considering the parallel postulate less obvious than the others, see Nagel and Newman 1958, p. 9.
- Cajori (1918), p. 197
- A detailed discussion can be found in James T. Smith (2000). "Chapter 2: Foundations". Methods of geometry. Wiley. pp. 19 ff. ISBN 0-471-25183-6.
- Société française de philosophie (1900). Revue de métaphysique et de morale, Volume 8. Hachette. p. 592.
- Bertrand Russell (2000). "Mathematics and the metaphysicians". In James Roy Newman. The world of mathematics 3 (Reprint of Simon and Schuster 1956 ed.). Courier Dover Publications. p. 1577. ISBN 0-486-41151-6.
- Bertrand Russell (1897). "Introduction". An essay on the foundations of geometry. Cambridge University Press.
- George David Birkhoff, Ralph Beatley (1999). "Chapter 2: The five fundamental principles". Basic Geometry (3rd ed.). AMS Bookstore. pp. 38 ff. ISBN 0-8218-2101-6.
- James T. Smith. "Chapter 3: Elementary Euclidean Geometry". Cited work. pp. 84 ff.
- Edwin E. Moise (1990). Elementary geometry from an advanced standpoint (3rd ed.). Addison–Wesley. ISBN 0-201-50867-2.
- John R. Silvester (2001). "§1.4 Hilbert and Birkhoff". Geometry: ancient and modern. Oxford University Press. ISBN 0-19-850825-5.
- Alfred Tarski (2007). "What is elementary geometry". In Leon Henkin, Patrick Suppes & Alfred Tarski. Studies in Logic and the Foundations of Mathematics – The Axiomatic Method with Special Reference to Geometry and Physics (Proceedings of International Symposium at Berkeley 1957–8; Reprint ed.). Brouwer Press. p. 16. ISBN 1-4067-5355-6. "We regard as elementary that part of Euclidean geometry which can be formulated and established without the help of any set-theoretical devices"
- Keith Simmons (2009). "Tarski's logic". In Dov M. Gabbay, John Woods. Logic from Russell to Church. Elsevier. p. 574. ISBN 0-444-51620-4.
- Franzén, Torkel (2005). Gödel's Theorem: An Incomplete Guide to its Use and Abuse. AK Peters. ISBN 1-56881-238-8. Pp. 25–26.
- Petri Mäenpää (1999). "From backward reduction to configurational analysis". In Michael Otte, Marco Panza. Analysis and synthesis in mathematics: history and philosophy. Springer. p. 210. ISBN 0-7923-4570-3.
- Carlo Cellucci (2008). "Why proof? What is proof?". In Rossella Lupacchini, Giovanna Corsi. Deduction, Computation, Experiment: Exploring the Effectiveness of Proof. Springer. p. 1. ISBN 88-470-0783-6.
- Eric W. Weisstein (2003). "Euclid's postulates". CRC concise encyclopedia of mathematics (2nd ed.). CRC Press. p. 942. ISBN 1-58488-347-2.
- Deborah J. Bennett (2004). Logic made easy: how to know when language deceives you. W. W. Norton & Company. p. 34. ISBN 0-393-05748-8.
- AN Kolmogorov, AF Semenovich, RS Cherkasov (1982). Geometry: A textbook for grades 6–8 of secondary school [Geometriya. Uchebnoe posobie dlya 6–8 klassov srednie shkoly] (3rd ed.). Moscow: "Prosveshchenie" Publishers. pp. 372–376. A description of the approach, which was based upon geometric transformations, can be found in Teaching geometry in the USSR Chernysheva, Firsov, and Teljakovskii
- Viktor Vasilʹevich Prasolov, Vladimir Mikhaĭlovich Tikhomirov (2001). Geometry. AMS Bookstore. p. 198. ISBN 0-8218-2038-9.
- Petri Mäenpää (1998). "Analytic program derivation in type theory". In Giovanni Sambin, Jan M. Smith. Twenty-five years of constructive type theory: proceedings of a congress held in Venice, October 1995. Oxford University Press. p. 113. ISBN 0-19-850127-7.
- Celia Hoyles (Feb. 1997). "The curricular shaping of students' approach to proof". For the Learning of Mathematics (FLM Publishing Association) 17 (1): 7–16. JSTOR 40248217.
- Ball, W.W. Rouse (1960). A Short Account of the History of Mathematics (4th ed. [Reprint. Original publication: London: Macmillan & Co., 1908] ed.). New York: Dover Publications. pp. 50–62. ISBN 0-486-20630-0.
- Coxeter, H.S.M. (1961). Introduction to Geometry. New York: Wiley.
- Eves, Howard (1963). A Survey of Geometry. Allyn and Bacon.
- Heath, Thomas L. (1956). The Thirteen Books of Euclid's Elements (3 vols.) (2nd ed. [Facsimile. Original publication: Cambridge University Press, 1925]). New York: Dover Publications. ISBN 0-486-60088-2 (vol. 1), ISBN 0-486-60089-0 (vol. 2), ISBN 0-486-60090-4 (vol. 3). Heath's authoritative translation of Euclid's Elements plus his extensive historical research and detailed commentary throughout the text.
- Misner, Thorne, and Wheeler (1973). Gravitation. W.H. Freeman.
- Mlodinow (2001). Euclid's Window. The Free Press.
- Nagel, E. and Newman, J.R. (1958). Gödel's Proof. New York University Press.
- Alfred Tarski (1951) A Decision Method for Elementary Algebra and Geometry. Univ. of California Press.
- Hazewinkel, Michiel, ed. (2001), "Euclidean geometry", Encyclopedia of Mathematics, Springer, ISBN 978-1-55608-010-4
- Hazewinkel, Michiel, ed. (2001), "Plane trigonometry", Encyclopedia of Mathematics, Springer, ISBN 978-1-55608-010-4
- Kiran Kedlaya, Geometry Unbound (a treatment using analytic geometry; PDF format, GFDL licensed)
Students investigate the history of humankind’s need to develop tools that enhance quality of life and satisfy the need for efficiency. Students quickly discover that survival depends on energy resources throughout history and technology dictates how energy resources are used. Students analyze the trade-offs associated with various energy sources. Students evaluate alternative energy sources, experiment with alternative fuels for cars, conduct an appliance survey, and compare how countries are similar or different in their energy use. As a culminating project, students simulate the process of buying a car using data and statistics that could impact their decision.
What is a quality life?
How can we use energy resources responsibly?
How can data and research affect your decisions?
What are the positive and negative impacts technology has had on our world’s energy problems?
What is the measurement process for an emissions test?
What are alternative energy sources and how can we use them?
What problems are associated with fossil fuels?
How can I use scatter plots, histograms, charts, and other data analysis methods to better understand the impact of alternative energy sources?
View how a variety of student-centered assessments are used in the Energy Innovations Unit Plan. These assessments help students and teachers set goals; monitor student progress; provide feedback; assess thinking, processes, performances, and products; and reflect on learning throughout the learning cycle.
Part I: Presenting the Problem
Have students respond to the Essential Question, What is a quality life? In pairs, have students list factors that contribute to a quality life and rank the factors from most important to least. Ask students to share their rankings within a team of four, and create a new ranking based on team consensus. Have each team report out, and then conduct a class discussion on commonalities among the factors. Guide students to the conclusion that many of the factors relate to energy.
Review with students their knowledge of energy concepts using an energy pre-assessment. Use this assessment to guide the development of lessons on energy concepts appropriate for their level.
Ask students to share what specifically has influenced them and their family to conserve energy and what methods they currently use to conserve energy, if any. Create a poster with the cumulative answers from each class. Post somewhere visible in the classroom.
Create a multimedia presentation that contains headlines from major newspapers and magazines around the world that focus on energy topics. Guide students to the realization that energy is a driving force behind many political issues, economic policies, and wars. Conduct an open-ended discussion by asking the Unit Question, How can we use our energy resources responsibly? and the question, Are we using our energy resources wisely? (As an alternative, have students create their own slideshow presentations that highlight major stories and events focusing on energy.) Have students journal how their decisions about conserving energy could impact their quality of life. Inform students that they will use their journals periodically throughout the unit to reflect on their learning.
Part II: Understanding the Problem
Ask students the Content Question, What are alternative energy sources and how can we use them? Provide background information by assigning the Clean Air Acts activity as homework. This activity explains the history of energy conservation and the impact of alternative energy sources. As an alternative, plan with the social studies teachers to implement this assignment.
Sessions 3 through 7
Divide students equally into eight groups. Assign each group one of the following energy sources:
Explain that each group will prepare a learning center on their assigned energy source from which other groups will learn. Present the project plan, which helps each group organize their learning center. Show students how to use bookmarking systems—such as del.icio.us, Diigo, and Furl—to keep track of Internet research. Hand out research notes template for students to use for their research information.
Hand out the energy source rubric, which explains that students will investigate the technology, advantages/disadvantages, history, environment and economic costs, and application of each energy source. Explain to students that this is the type of information that will be displayed in their learning center. Review with students data analysis methods, such as scatter plots, box plots, linear regression, and histograms. Describe how the data analysis methods might be used in the project. Review various types of data and have students practice identifying the types of data from graphs collected prior to the lesson.
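One possible quick demo of these methods, a scatter plot with a fitted regression line, is sketched below in Python; the weight and mileage numbers are made up for illustration and are not part of the unit plan (the unit itself assumes spreadsheet software).

```python
# Demo: scatter plot with a simple linear regression line on made-up fuel-economy data.
import numpy as np
import matplotlib.pyplot as plt

vehicle_weight = np.array([2200, 2600, 3000, 3400, 3800, 4200])  # lbs (hypothetical)
mpg = np.array([38, 33, 29, 26, 23, 20])                          # miles per gallon (hypothetical)

slope, intercept = np.polyfit(vehicle_weight, mpg, 1)              # degree-1 fit = linear regression
plt.scatter(vehicle_weight, mpg)
plt.plot(vehicle_weight, slope * vehicle_weight + intercept)
plt.xlabel("Vehicle weight (lbs)")
plt.ylabel("Fuel economy (mpg)")
plt.title("Scatter plot with fitted line")
plt.show()
```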
Explain that a brochure can also be used as part of their learning center. Show students the brochure sample. Discuss how the brochure can be improved by adding data analysis methods. Brainstorm, as a class, specific data analysis methods that could be used on this brochure. Encourage students to ask questions that can be answered using data analysis methods.
Tell students that they will also prepare and participate in a debate and try to persuade others why their assigned energy source is the best for society and Earth. Have each group create a poster that displays their data and research, and uses spreadsheet software to help analyze their data to use during the debate. As an extension, show student how to work as a group using the Showing Evidence Tool. Explain how claims are made about an energy source and how information is inserted to back up the claims. Encourage students to access their information from the Showing Evidence workspace during the debate.
During this project, set aside time for whole class experiments on combustion, endothermic/exothermic chemical reactions, particulate concentrations, and conservation of energy to help students gain a better understanding of the general principles and impact of energy. If available, have students use science probeware to collect data and practice representing data graphically.
Hand out the group assessment and have students assess their group process and individual work at a designated point during the work sessions. Hand out the debate checklist to inform groups of the expectations during the debate. This checklist could also be used as a scoring guide during the debate.
Conduct the class debate. Instruct each group to assign a debate leader. Have each group review the debate checklist. Allow time for students to assess their groups again using the group assessment after the debate. Provide time for students to complete another entry in their journals on new learning and insights gained through the activities experienced.
After the debate, allow time for students to walk through each learning center. Hand out the data assignment worksheet that explains how each student will use data analysis methods to culminate the information from each learning center; specifically, students analyze gaseous and particulate pollution, efficiency, production and consumption of each energy source. Use the Content Question, What problems are associated with fossil fuels? as an assessment question. Use the fossil fuel essay checklist to help students understand what information to include in the answer to the question.
Part III: Finding Solutions
Now that students have gained a foundation and global perspective on our world’s energy problems, ask students to brainstorm ways they can be part of the solution as a citizen of this world. Present the Unit Question, How can data and research affect your decisions? Ask students how they decide to buy certain products. Conduct a class discussion defining the methods and procedures currently used by students. Have students share methods their parents might use as well. Emphasize any decision making process that uses data and research.
Show pictures of large cities in which air pollution is prevalent to get students thinking about automobiles and their impact on our environment. Explain to students that they will conduct research on how their quality of life is impacted by two major factors:
Present a mini-lesson on the process of measuring vehicle emissions. Have students take notes in their journals. As homework or an extension, instruct students to find information on how emissions testing and standards vary from state to state as well as by country. Have them investigate five large cities from different countries (such as Mexico City, Cairo, Hong Kong, New York, and Los Angeles) to compare the impact of vehicle emissions on the specific city.
Sessions 11 and 12
If materials are available, plan for whole class experiments on solar and hydrogen vehicles, byproducts of combustion, and emissions.
Show students how to use the Visual Ranking Tool to organize their research and evaluation of the different types of fuels sources for vehicles. Prepare the workspace to include the following factors:
Working in pairs, have students conduct Internet research of the different fuel sources and rank which ones they believe to be the best option for most cars. Explain that students are to list major advantages and disadvantages of each and include their reasoning of their rankings. Show students how to compare their thinking with other pairs in the class. Explain the correlation coefficients and how to interpret the numbers. Allow students time to change their ranking based on new insights gained from their peers. Remind students to use their bookmarking systems to keep track of their Internet sources.
Hand out the alternative fuels checklist to guide students’ thinking during this activity. Provide time for students to add another journal entry on new insights and reflections on their current learning.
Session 14—The Homework Project
Explain to students that they will simulate the process of choosing a vehicle to purchase. Instruct students to first pick two vehicles to compare side by side. One vehicle is their choice but the second vehicle must be a brand new green car. Explain the terminology green car and show examples from Web sites of these types of vehicles, also known as Alternative Fuel Vehicles (AFVs).
Hand out the purchase guide instructions that explain the steps required for this project. Have students use the purchase guide checklist and purchase guide self-assessment to assess their work as well as each other’s work throughout the project. Explain that students will investigate the technology behind the alternative fuel source for the AFV they chose and conduct in-depth research on both vehicles’ emissions and the technologies used in their chosen vehicles. They will use data analysis methods to analyze gaseous and particulate pollution, efficiency, production and consumption, and compare each vehicle’s fuel costs for one year based on their individual commuting situation.
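A minimal sketch of the year-one fuel-cost part of that comparison is shown below. Every number in it is a placeholder; students would substitute their own annual mileage, local fuel prices, and the efficiency figures they research for their two vehicles.

```python
# Placeholder year-one fuel-cost comparison between two vehicles.
annual_miles = 12_000  # hypothetical commute

vehicles = {
    "conventional (gasoline)": {"mpg": 28, "price_per_gallon": 3.50},
    "alternative-fuel (AFV)":  {"mpg": 50, "price_per_gallon": 3.50},
}

for name, v in vehicles.items():
    gallons = annual_miles / v["mpg"]
    cost = gallons * v["price_per_gallon"]
    print(f"{name}: {gallons:.0f} gallons, ${cost:,.2f} per year")
```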
Pass out several consumer magazines and have students observe the structure and data included for the products highlighted. As a class, create an electronic consumer magazine that has all the individual projects compiled into one magazine. This magazine should feature the current technologies of vehicles with the independent data collected from each student.
As an extension, students can apply for extra credit to serve on the editorial staff responsible for putting the magazine together with a cover, table of contents, and unifying structure.
Allow students two weeks to complete this project at home. Midway through the project, provide time in class for students to conference with each other on their progress. Provide another copy of the purchase guide checklist for students to use during their peer conference. Explain that students are to continue writing in their journals during the homework project, including insights and reflections on their learning.
Session 16 (after students have completed the vehicle project)
As a final assessment, ask students to answer the following questions reflecting on the projects they completed during the unit:
Hand out the essay rubric as a guide for students to use while answering the questions.
After the editorial staff has completed the class consumer magazine, post the magazine on a Web site for all students and parents to read.
A teacher participated in the Intel® Teach Program, which resulted in this idea for a classroom project. A team of teachers expanded the plan into the example you see here.
Subjects: Science, Algebra, Social Issues
Topics: Energy, Alternative Fuels, Graphing, Data Analysis
Higher-Order Thinking Skills: Decision Making, Analysis, Interpretation, Evaluation
Key Learnings: Organizing Data, Critical Thinking, Statistical Analysis, Alternative Energy Sources
Time Needed: 16 class periods (55-minute classes); 2 weeks for the homework project
The correlation is one of the most common and most useful statistics. A correlation is a single number that describes the degree of relationship between two variables. Let's work through an example to show you how this statistic is computed.
Let's assume that we want to look at the relationship between two variables, height (in inches) and self esteem. Perhaps we have a hypothesis that how tall you are affects your self esteem (incidentally, I don't think we have to worry about the direction of causality here -- it's not likely that self esteem causes your height!). Let's say we collect some information on twenty individuals (all male -- we know that the average height differs for males and females so, to keep this example simple we'll just use males). Height is measured in inches. Self esteem is measured based on the average of 10 1-to-5 rating items (where higher scores mean higher self esteem). Here's the data for the 20 cases (don't take this too seriously -- I made this data up to illustrate what a correlation is):
Now, let's take a quick look at the histogram for each variable:
And, here are the descriptive statistics:
Finally, we'll look at the simple bivariate (i.e., two-variable) plot:
You should immediately see in the bivariate plot that the relationship between the variables is a positive one (if you can't see that, review the section on types of relationships) because if you were to fit a single straight line through the dots it would have a positive slope or move up from left to right. Since the correlation is nothing more than a quantitative estimate of the relationship, we would expect a positive correlation.
What does a "positive relationship" mean in this context? It means that, in general, higher scores on one variable tend to be paired with higher scores on the other and that lower scores on one variable tend to be paired with lower scores on the other. You should confirm visually that this is generally true in the plot above.
Calculating the Correlation
Now we're ready to compute the correlation value. The formula for the correlation is:
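The formula referred to here, reconstructed from the table columns (x*y, x*x, y*y) and the symbol N used below, is the standard computational form of Pearson's r:

$$r = \frac{N\sum xy - \left(\sum x\right)\left(\sum y\right)}{\sqrt{\left[N\sum x^2 - \left(\sum x\right)^2\right]\left[N\sum y^2 - \left(\sum y\right)^2\right]}}$$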
We use the symbol r to stand for the correlation. Through the magic of mathematics it turns out that r will always be between -1.0 and +1.0. If the correlation is negative, we have a negative relationship; if it's positive, the relationship is positive. You don't need to know how we came up with this formula unless you want to be a statistician. But you probably will need to know how the formula relates to real data -- how you can use the formula to compute the correlation. Let's look at the data we need for the formula. Here's the original data with the other necessary columns:
| Person | Height (x) | Self Esteem (y) | x*y | x*x | y*y |
The first three columns are the same as in the table above. The next three columns are simple computations based on the height and self esteem data. The bottom row consists of the sum of each column. This is all the information we need to compute the correlation. Here are the values from the bottom row of the table (where N is 20 people) as they are related to the symbols in the formula:
Now, when we plug these values into the formula given above, we get the following (I show it here tediously, one step at a time):
So, the correlation for our twenty cases is .73, which is a fairly strong positive relationship. I guess there is a relationship between height and self esteem, at least in this made up data!
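A small Python sketch of the same computation may be helpful. The function below implements the sum-based formula reconstructed above; since the tutorial's 20-case height/self-esteem table is not reproduced here, the two lists are placeholder values only, so the printed result will not be the .73 obtained in the text.

```python
# Pearson's r via the computational (sum-based) formula used in this section.
import math

def pearson_r(x, y):
    n = len(x)
    sum_x, sum_y = sum(x), sum(y)
    sum_xy = sum(a * b for a, b in zip(x, y))
    sum_x2 = sum(a * a for a in x)
    sum_y2 = sum(b * b for b in y)
    numerator = n * sum_xy - sum_x * sum_y
    denominator = math.sqrt((n * sum_x2 - sum_x ** 2) * (n * sum_y2 - sum_y ** 2))
    return numerator / denominator

heights = [68, 71, 62, 75, 58, 60, 67, 68, 71, 69]            # placeholder data
esteem = [4.1, 4.6, 3.8, 4.4, 3.2, 3.1, 3.8, 4.1, 4.3, 3.7]   # placeholder data
print(round(pearson_r(heights, esteem), 2))
```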
Testing the Significance of a Correlation
Once you've computed a correlation, you can determine the probability that the observed correlation occurred by chance. That is, you can conduct a significance test. Most often you are interested in determining the probability that the correlation is a real one and not a chance occurrence. In this case, you are testing the mutually exclusive hypotheses:
| Null Hypothesis: | r = 0 |
| Alternative Hypothesis: | r ≠ 0 |
The easiest way to test this hypothesis is to find a statistics book that has a table of critical values of r. Most introductory statistics texts would have a table like this. As in all hypothesis testing, you need to first determine the significance level. Here, I'll use the common significance level of alpha = .05. This means that I am conducting a test where the odds that the correlation is a chance occurrence is no more than 5 out of 100. Before I look up the critical value in a table I also have to compute the degrees of freedom or df. The df is simply equal to N-2 or, in this example, is 20-2 = 18. Finally, I have to decide whether I am doing a one-tailed or two-tailed test. In this example, since I have no strong prior theory to suggest whether the relationship between height and self esteem would be positive or negative, I'll opt for the two-tailed test. With these three pieces of information -- the significance level (alpha = .05), degrees of freedom (df = 18), and type of test (two-tailed) -- I can now test the significance of the correlation I found. When I look up this value in the handy little table at the back of my statistics book I find that the critical value is .4438. This means that if my correlation is greater than .4438 or less than -.4438 (remember, this is a two-tailed test) I can conclude that the odds are less than 5 out of 100 that this is a chance occurrence. Since my correlation of .73 is actually quite a bit higher, I conclude that it is not a chance finding and that the correlation is "statistically significant" (given the parameters of the test). I can reject the null hypothesis and accept the alternative.
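The table lookup can also be reproduced numerically. The sketch below assumes SciPy is available; it converts r to a t statistic with df = N - 2, computes the two-tailed p-value, and back-converts the critical t into the critical r (which comes out to about .4438, matching the table value cited above).

```python
# Significance test for a correlation: two-tailed test of r = 0 at alpha = .05.
import math
from scipy import stats

r, n, alpha = 0.73, 20, 0.05
df = n - 2
t_stat = r * math.sqrt(df / (1 - r ** 2))      # t-transformation of r
p_value = 2 * stats.t.sf(abs(t_stat), df)      # two-tailed p-value
t_crit = stats.t.ppf(1 - alpha / 2, df)        # critical t for alpha = .05, df = 18
r_crit = t_crit / math.sqrt(df + t_crit ** 2)  # critical r, approximately .4438
print(p_value < alpha, round(r_crit, 4))       # True 0.4438
```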
The Correlation Matrix
All I've shown you so far is how to compute a correlation between two variables. In most studies we have considerably more than two variables. Let's say we have a study with 10 interval-level variables and we want to estimate the relationships among all of them (i.e., between all possible pairs of variables). In this instance, we have 45 unique correlations to estimate (more later on how I knew that!). We could do the above computations 45 times to obtain the correlations. Or we could use just about any statistics program to automatically compute all 45 with a simple click of the mouse.
I used a simple statistics program to generate random data for 10 variables with 20 cases (i.e., persons) for each variable. Then, I told the program to compute the correlations among these variables. Here's the result:
      C1     C2     C3     C4     C5     C6     C7     C8     C9     C10
C1   1.000
C2   0.274  1.000
C3  -0.134 -0.269  1.000
C4   0.201 -0.153  0.075  1.000
C5  -0.129 -0.166  0.278 -0.011  1.000
C6  -0.095  0.280 -0.348 -0.378 -0.009  1.000
C7   0.171 -0.122  0.288  0.086  0.193  0.002  1.000
C8   0.219  0.242 -0.380 -0.227 -0.551  0.324 -0.082  1.000
C9   0.518  0.238  0.002  0.082 -0.015  0.304  0.347 -0.013  1.000
C10  0.299  0.568  0.165 -0.122 -0.106 -0.169  0.243  0.014  0.352  1.000
This type of table is called a correlation matrix. It lists the variable names (C1-C10) down the first column and across the first row. The diagonal of a correlation matrix (i.e., the numbers that go from the upper left corner to the lower right) always consists of ones. That's because these are the correlations between each variable and itself (and a variable is always perfectly correlated with itself). This statistical program only shows the lower triangle of the correlation matrix. In every correlation matrix there are two triangles that are the values below and to the left of the diagonal (lower triangle) and above and to the right of the diagonal (upper triangle). There is no reason to print both triangles because the two triangles of a correlation matrix are always mirror images of each other (the correlation of variable x with variable y is always equal to the correlation of variable y with variable x). When a matrix has this mirror-image quality above and below the diagonal we refer to it as a symmetric matrix. A correlation matrix is always a symmetric matrix.
To locate the correlation for any pair of variables, find the value in the table at the intersection of the row and column for those two variables. For instance, to find the correlation between variables C5 and C2, I look for where row C2 meets column C5 (in this case it's blank because it falls in the upper triangle area) and where row C5 meets column C2 and, in the second case, I find that the correlation is -.166.
OK, so how did I know that there are 45 unique correlations when we have 10 variables? There's a handy simple little formula that tells how many pairs (e.g., correlations) there are for any number of variables:
Pairs = N*(N-1)/2

where N is the number of variables. In the example, I had 10 variables, so I know I have (10 * 9)/2 = 90/2 = 45 pairs.
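Here's a small Python sketch of the same exercise -- random data for 10 variables and 20 cases, the full correlation matrix, and the pair count. It assumes numpy and pandas are available; it isn't the program that produced the table above.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
data = pd.DataFrame(rng.normal(size=(20, 10)),
                    columns=[f"C{i}" for i in range(1, 11)])   # 10 variables, 20 cases

corr = data.corr()                  # the full 10 x 10 correlation matrix
print(corr.round(3))

n_vars = data.shape[1]
print(n_vars * (n_vars - 1) // 2)   # 45 unique pairs below the diagonal
```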
The specific type of correlation I've illustrated here is known as the Pearson Product Moment Correlation. It is appropriate when both variables are measured at an interval level. However, there are a wide variety of other types of correlations for other circumstances. For instance, if you have two ordinal variables, you could use the Spearman Rank Order Correlation (rho) or the Kendall Rank Order Correlation (tau). When one measure is a continuous interval-level one and the other is dichotomous (i.e., two-category) you can use the Point-Biserial Correlation. For other situations, consult the web-based statistics selection program, Selecting Statistics, at http://trochim.human.cornell.edu/selstat/ssstart.htm.
Lewis Carroll's Diagram and Categorical Syllogisms
Aristotelian logic, or the traditional study of deduction, deals with four so-called categorical or subject-predicate propositions, which can be defined by:

S a P ⇔ All S is P (universal affirmative or A proposition)
S i P ⇔ Some S is P (particular affirmative or I proposition)
S e P ⇔ No S is P (universal negative or E proposition)
S o P ⇔ Some S is not P (particular negative or O proposition)
S is called the subject (or minor) term and P is called the predicate (or major) term of the proposition. A categorical syllogism is a deductive argument about categorical propositions in which a conclusion is inferred from two premises. The term M that occurs in both premises is called the middle term. An example of a syllogism is M a P, S i M ⊨ S i P. There are 256 possible triples of categorical propositions, but only 24 valid syllogisms, some of which need existential support (Ex(S), Ex(P), or Ex(M), where Ex means "there is" or "there exists").
The aim of this Demonstration is to falsify (when the checkbox "valid syllogism" is unchecked) the argument, making the premises true and the conclusion false. According to Venn diagrams, a shaded region R means it is empty and + in the region R means existence of at least one element there (proposition Ex(R) is true). (To shade a region, click the appropriate button. A click in a region introduces a new locator (+) and Alt-click removes it.) Such an example is known as a counterexample.
Choosing a valid syllogism means that making the premises true will make the conclusion true as well. So an attempt to make the conclusion false will end in a contradiction (+ in the empty region).
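For readers who like to see the counterexample search spelled out, here is a rough Python sketch of the same idea: treat the diagram as eight regions (every combination of S, P, M membership), try all ways of leaving each region empty or occupied, and call a syllogism valid only if no filling makes the premises true and the conclusion false. This is only an illustration of the brute-force logic, not the code behind the Demonstration.

```python
from itertools import product

def truth(form, x, y, regions):
    """Truth of a categorical proposition over a tiny Venn diagram.
    regions maps each (is_S, is_P, is_M) key to a count of 0 or 1."""
    some_xy     = any(n for key, n in regions.items() if x(key) and y(key))
    some_x_noty = any(n for key, n in regions.items() if x(key) and not y(key))
    if form == 'a':   # All X is Y
        return not some_x_noty
    if form == 'i':   # Some X is Y
        return some_xy
    if form == 'e':   # No X is Y
        return not some_xy
    if form == 'o':   # Some X is not Y
        return some_x_noty

S = lambda k: k[0]
P = lambda k: k[1]
M = lambda k: k[2]

def valid(premise1, premise2, conclusion):
    """Brute-force search for a counterexample: premises true, conclusion false."""
    keys = list(product([False, True], repeat=3))
    for counts in product([0, 1], repeat=8):
        regions = dict(zip(keys, counts))
        if (truth(*premise1, regions) and truth(*premise2, regions)
                and not truth(*conclusion, regions)):
            return False          # counterexample found
    return True

# each triple is (form, subject term, predicate term)
print(valid(('a', M, P), ('i', S, M), ('i', S, P)))   # True  -- the example from the text
print(valid(('a', M, P), ('a', M, S), ('a', S, P)))   # False -- an invalid form
```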
The so-called figure of a categorical syllogism is determined by the position of the middle term. There are four figures:

Figure 1: M x P, S x M ⊨ S x P
Figure 2: P x M, S x M ⊨ S x P
Figure 3: M x P, M x S ⊨ S x P
Figure 4: P x M, M x S ⊨ S x P

where x is a, i, e, or o.
This version of Carroll's diagrams was found in one of the references below (p. 112). See also the Wikipedia entry for Categorical proposition.
R. Audi, ed., The Cambridge Dictionary of Philosophy, Cambridge: Cambridge University Press, 1995, pp. 780–782.
L. Borkowski, Elementy logiki formalnej (Elements of Formal Logic, in Polish), 3rd ed., Warsaw: Wyd, 1976.
L. Carroll, Symbolic Logic and the Game of Logic, New York: Dover, 1958.
I. M. Copi and C. Cohen, Introduction to Logic, 9th ed., New York: Macmillan, 1994, pp. 214–218.
Working 2,000 years before the development of calculus, the Greek mathematician Archimedes worked out a simple formula for the volume of a sphere:

V = (4/3) * pi * r^3
Of his many mathematical contributions, Archimedes was most proud of this result, even going so far as to ask that the method he used to work out the formula -- a diagram of a sphere inscribed in a cylinder -- be imprinted on his gravestone.
Archimedes' formula may have been a stroke of scientific genius in 250 B.C., but with the help of modern calculus the derivation is extremely simple. In this post I'll explain one way to derive the famous formula, and explain how it can be done in dimensions other than the usual three.
Consider the diagram below. It's a sphere with radius r. The goal is to find the volume, and here's how we do that.
Notice that one thing we can easily find is the area of a single horizontal slice of the ball. This is the shaded disk at the top of the diagram, which is drawn at height z. The disk has a radius of x, which we'll need to find the area of the disk. To find x, we can form a right triangle with sides z and x, and hypotenuse r. This is drawn in the figure. Then we can easily solve for x.
By the Pythagorean theorem, we know that x^2 + z^2 = r^2, so solving for x we have x = sqrt(r^2 - z^2). Then the area of the shaded disk is simply pi times the radius squared, or pi*(r^2 - z^2).
Now that we have the area of one horizontal disk, we want to add up the areas of all the horizontal disks inside the ball. That will give us the volume of the sphere. To do this, we simply take the definite integral of the disk area formula from above over all possible heights z, which run from -r (at the bottom of the ball) to r (at the top of the ball). That is, our volume is given by

V = ∫ from -r to r of pi*(r^2 - z^2) dz = pi*[ r^2*z - z^3/3 ] evaluated from -r to r = (4/3)*pi*r^3,

which is the volume formula we were looking for.
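If you want to check the integral symbolically, here's a small sketch using sympy (assumed available):

```python
import sympy as sp

z, r = sp.symbols('z r', positive=True)
disk_area = sp.pi * (r**2 - z**2)          # area of the slice at height z
volume = sp.integrate(disk_area, (z, -r, r))
print(sp.simplify(volume))                  # 4*pi*r**3/3
```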
This same logic can be used to derive formulas for the volume of a "ball" in 4, 5, and higher dimensions as well. Doing so, you can show that the volume of a unit ball in one dimension (a line) is just 2; the volume in two dimensions (a disk) is pi; and -- as we've just shown -- the volume in three dimensions (a sphere) is (4/3)*pi. Continuing on to four, five, and ultimately n dimensions, a surprising result appears.
It turns out the volume of a unit ball peaks at five dimensions, and then proceeds to shrink thereafter, ultimately approaching zero as the dimension n goes to infinity. You can read more about this beautiful mathematical result here.
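Here's a quick numerical sketch of that claim, using the standard formula pi^(n/2) / Gamma(n/2 + 1) for the volume of a unit ball in n dimensions:

```python
from math import pi, gamma

def unit_ball_volume(n):
    """Volume of the unit ball in n dimensions."""
    return pi ** (n / 2) / gamma(n / 2 + 1)

for n in range(1, 11):
    print(n, round(unit_ball_volume(n), 4))
# the volume rises to a peak at n = 5 (about 5.2638) and shrinks thereafter
```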
Addendum: You can find a clear explanation for how the volume-of-spheres formula generalizes to n dimensions on page 888-889 here.
A while back I posted a long proof of the Bolzano-Weierstrass theorem -- also known as the "sequential compactness theorem" -- which basically says every sequence that's bounded has a subsequence within it that converges. Here's a much shorter and simpler version of it.
First we'll prove a lemma that shows for any sequence we can always find a monotone subsequence -- that is, a subsequence that's always increasing or decreasing.
Lemma. Every sequence has a monotone subsequence.
Proof. Let (a_n) be a sequence. Define a "peak" of (a_n) as an element a_n such that a_n >= a_m for all m > n. That is, a_n is a peak if, from that point forward, there is no other element of the sequence that is greater than a_n. Intuitively, think of shining a flashlight from the right onto the "mountain range" of a sequence's plotted elements. If the light hits an element, that element is a peak.
If (a_n) has infinitely many peaks, then collect those peaks into a subsequence (a_{n_k}). This is a monotone decreasing subsequence, as required.
If (a_n) has finitely many peaks, take n to be the position of the last peak, and let n_1 = n + 1. Then we know a_{n_1} is not a peak. So there exists an n_2 > n_1 such that a_{n_2} > a_{n_1}. We also know that a_{n_2} is not a peak either. So there also exists an n_3 > n_2 such that a_{n_3} > a_{n_2}.

Continuing in this way, we can create a subsequence (a_{n_k}) that is monotone increasing. In either case -- whether our sequence has infinitely or finitely many peaks -- we can always find a monotone subsequence, as required.
Now that we've proved the above lemma, the proof of the Bolzano-Weierstrass theorem follows easily.
Theorem (Bolzano-Weierstrass). Every bounded sequence has a convergent subsequence.
Proof. By the previous lemma, every sequence has a monotone subsequence. Call this (a_{n_k}). Since (a_n) is bounded by assumption, the subsequence (a_{n_k}) is also bounded. So by the monotone convergence theorem, since (a_{n_k}) is monotone and bounded, it converges. So every bounded sequence has a convergent subsequence, completing the proof.
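Purely as an illustration of the lemma's "peaks" idea, here's a rough Python sketch that extracts a monotone subsequence from a finite list. A finite list is only a stand-in for an infinite sequence, so the analogy is loose, but the two cases mirror the two cases of the proof.

```python
def monotone_subsequence(xs):
    """Extract a monotone subsequence from a finite list using the 'peaks' idea.
    An index i is a "peak" if xs[i] >= xs[j] for every j > i."""
    n = len(xs)
    peaks = [i for i in range(n) if all(xs[i] >= xs[j] for j in range(i + 1, n))]
    if len(peaks) > 1:
        return [xs[i] for i in peaks]      # the peaks form a non-increasing subsequence
    # few peaks: greedily climb upward instead, mirroring the second case of the proof
    sub = [xs[0]]
    for x in xs[1:]:
        if x > sub[-1]:
            sub.append(x)
    return sub

print(monotone_subsequence([3, 1, 4, 1, 5, 9, 2, 6]))   # [9, 6]
print(monotone_subsequence([1, 2, 3, 4]))               # [1, 2, 3, 4]
```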
Most people think of linear algebra as a tool for solving systems of linear equations. While it definitely helps with that, the theory of linear algebra goes much deeper, providing powerful insights into many other areas of math.
In this post I'll explain a powerful and surprising application of linear algebra to another field of mathematics -- calculus. I'll explain how the fundamental calculus operations of differentiation and integration can be understood instead as a linear transformation. This is the "linear algebra" view of basic calculus.
Taking Derivatives as a Linear Transformation
In linear algebra, the concept of a vector space is very general. Anything can be a vector space as long as it follows two rules.
The first rule is that if u and v are in the space, then u + v must also be in the space. Mathematicians call this "closed under addition." Second, if u is in the space and c is a constant, then cu must also be in the space. This is known as "closed under scalar multiplication." Any collection of objects that follows those two rules -- they can be vectors, functions, matrices and more -- qualifies as a vector space.
One of the more interesting vector spaces is the set of polynomials of degree less than or equal to n. This is the set of all functions that have the following form:

p(t) = a0 + a1*t + a2*t^2 + ... + an*t^n

where a0...an are constants.
Is this really a vector space? To check, we can verify that it follows our two rules from above. First, if p(t) and q(t) are both polynomials, then p(t) + q(t) is also a polynomial. That shows it's closed under addition. Second, if p(t) is a polynomial, so is c times p(t), where c is a constant. That shows it's closed under scalar multiplication. So the set of polynomials of degree at most n is indeed a vector space.
Now let's think about calculus. One of the first methods we learn is taking derivatives of polynomials. It's easy. If our polynomial is ax^2 + 3x, then our first derivative is 2ax + 3. This is true for all polynomials. So the general first derivative of an nth degree polynomial is given by:

p'(t) = a1 + 2*a2*t + 3*a3*t^2 + ... + n*an*t^(n-1)
The question is: is this also a vector space? To answer that, we check to see that it follows our two rules above. First, if we add any two derivatives together, the result will still be the derivative of some polynomial. Second, if we multiply any derivative by a constant c, this will still be the derivative of some polynomial. So the set of first derivatives of polynomials is also a vector space.
Now that we know polynomials and their first derivatives are both vector spaces, we can think of the operation "take the derivative" as a rule that maps "things in the first vector space" to "things in the second vector space." That is, taking the derivative of a polynomial is a "linear transformation" that maps one vector space (the set of all polynomials of degree at most n) into another vector space (the set of all first derivatives of polynomials of degree at most n).
If we call the set of polynomials P_n, then the set of derivatives of this is P_(n-1), since taking the first derivative will reduce the degree of each polynomial term by 1. Thus, the operation "take the derivative" is just a function that maps P_n into P_(n-1). A similar argument shows that "taking the integral" is also a linear transformation in the opposite direction, from P_(n-1) into P_n.
Once we realize differentiation and integration from calculus is really just a linear transformation, we can describe them using the tools of linear algebra.
Here's how we do that. To fully describe any linear transformation as a matrix multiplication in linear algebra, we follow three steps.
First, we find a basis for the subspace in the domain of the transformation. That is, if our transformation is from P_n into P_(n-1), we first write down a basis for P_n.
Next, we feed each element of this basis through the linear transformation, and see what comes out the other side. That is, we apply the transformation to each element of the basis, which gives the "image" of each element under the transformation. Since every element of the domain is some combination of those basis elements, by running them through the transformation we can see the impact the transformation will have on any element in the domain.
Finally, we collect each of those resulting images into the columns of a matrix. That is, each time we run an element of the basis through the linear transformation, the output will be a vector (the "image" of the basis element). We then place these vectors into a matrix D, one in each column from left to right. That matrix D will fully represent our linear transformation.
An Example for Third-Degree Polynomials
Here's an example of how to do this for P_3, the set of all polynomials of at most degree 3. This is the set of all functions of the following form:

p(t) = a0 + a1*t + a2*t^2 + a3*t^3

where a0...a3 are constants. When we apply our transformation, "take the derivative of this polynomial," it will reduce the degree of each term in our polynomial by one. Thus, the transformation D will be a linear mapping from P_3 to P_2, which we write as D: P_3 → P_2.
To find the matrix representation for our transformation, we follow our three steps above: find a basis for the domain, apply the transformation to each basis element, and compile the resulting images into columns of a matrix.
First we find a basis for P_3. The simplest basis is the following: 1, t, t^2, and t^3. All third-degree polynomials will be some linear combination of these four elements. In vector notation, we say that a basis for P_3 is given by the four vectors (1, 0, 0, 0), (0, t, 0, 0), (0, 0, t^2, 0), and (0, 0, 0, t^3).
Now that we have a basis for our domain , the next step is to feed the elements of it into the linear transformation to see what it does to them. Our linear transformation is, "take the first derivative of the element." So to find the "image" of each element, we just take the first derivative.
The first element of the basis is 1. The derivative of this is just zero. That is, the transformation D maps the vector (1, 0, 0, 0) to (0, 0, 0). Our second element is t. The derivative of this is just one. So the transformation D maps our second basis vector (0, t, 0, 0) to (1, 0, 0). Similarly for our third and fourth basis vectors, the transformation maps (0, 0, t^2, 0) to (0, 2t, 0), and it maps (0, 0, 0, t^3) to (0, 0, 3t^2).
Applying our transformation to the four basis vectors, we get the following four images under D: (0, 0, 0), (1, 0, 0), (0, 2t, 0), and (0, 0, 3t^2).
Now that we've applied our linear transformation to each of our four basis vectors, we next collect the resulting images into the columns of a matrix. This is the matrix we're looking for -- it fully describes the action of differentiation for any third-degree polynomial in one simple matrix.
Collecting our four image vectors into a matrix, we have:

D = [ 0  1  0  0 ]
    [ 0  0  2  0 ]
    [ 0  0  0  3 ]

(using the basis order 1, t, t^2, t^3 for the domain and 1, t, t^2 for the range)
This matrix gives the linear algebra view of differentiation from calculus. Using it, we can find the derivative of any polynomial of degree three by expressing it as a vector and multiplying by this matrix.
For example, consider the polynomial . Note that the first derivative of this polynomial is ; we'll use this in a minute. In vector form, this polynomial can be written as:
To find its derivative, we simply multiply this vector by our D matrix from above:
which is exactly the first derivative of our polynomial function!
This is a powerful tool. By recognizing that differentiation is just a linear transformation -- as is integration, which follows a similar argument that I'll leave as an exercise -- we can see it's really just a rule that linearly maps functions in P_3 to functions in P_2.

In fact, all m x n matrices can be understood in this way. That is, an m x n matrix is just a linear mapping that sends vectors in R^n into R^m. In the case of the example above, we have a 3 x 4 matrix that sends polynomials in P_3 (such as ax^3 + bx^2 + cx + d, which has four elements) into the space of first derivatives in P_2 (in this case, 3ax^2 + 2bx + c, which has three elements).
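Here's a small numpy sketch of the same idea. It assumes coefficient vectors are ordered from the constant term up, i.e. (a0, a1, a2, a3) stands for a0 + a1*t + a2*t^2 + a3*t^3, and uses the differentiation matrix consistent with that ordering.

```python
import numpy as np

# Differentiation matrix for P_3 -> P_2 with basis order (1, t, t^2, t^3) -> (1, t, t^2)
D = np.array([[0, 1, 0, 0],
              [0, 0, 2, 0],
              [0, 0, 0, 3]])

# Example polynomial p(t) = 4 + 3t + 2t^2 + 5t^3, written as a coefficient vector
p = np.array([4, 3, 2, 5])

print(D @ p)   # [ 3  4 15]  ->  p'(t) = 3 + 4t + 15t^2
```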
For more on linear transformations, here's a useful lecture from MIT's Gilbert Strang.
Exact differential equations are interesting and easy to solve. But you wouldn't know it from the way they're taught in most textbooks. Many authors stumble through pages of algebra trying to explain the method, leaving students baffled.
Thankfully, there's an easier way to understand exact differential equations. Years ago, I tried to come up with the simplest possible way of explaining the method. Here's what I came up with.
The entire method of solving exact differential equations can be boiled down to the diagram below: "Exact ODEs in a Nutshell."
Recall that exact ODEs are ones that we can write as M(x,y) + N(x,y)*y' = 0, where M and N are continuous functions, and y' is dy/dx. Here is how to read the diagram.
Starting with an exact ODE, we're on the second line labeled "starting point." We have functions M and N, and our goal is to move upward toward the top line labeled "goal." That is, given an exact ODE, we want to find a solution F(x,y) = c whose first partial derivatives are Fx (which is just the function M) and Fy (which is the function N).
Before we do anything, we check that our equation is really exact. To do this, we move to the bottom line labeled "test for exactness." That is, we take the derivative of Fx = M with respect to y (giving us Fxy = My). And we take the derivative of Fy = N with respect to x (which gives us Fyx = Nx). Set these equal to each other. A basic theorem from calculus says that the mixed partial derivatives Fxy and Fyx will be the same for any function F(x,y). If they're equal, F(x,y) on the top line is guaranteed to exist.
Now we can solve for the function F(x,y). The diagram makes it easy to see how. We know M(x,y) is just the first partial derivative of F with respect to x. So we can move upward toward F(x,y) by integrating M with respect to x. Similarly, we know the function N(x,y) is just the first partial derivative of F(x,y) with respect to y, so we can find another candidate for F by integrating N with respect to y.
In the end, we'll have two candidates for F(x,y). Sometimes they're the same, in which case we're done. Sometimes they're different, as one will have a term the other won't have -- a term that got dropped to zero as we differentiated from F(x,y) to either Fx or Fy, since it's a function of only one of x or y. This is easy to solve: just combine all the terms from both candidates for F(x,y), omitting any duplicate terms. This will be our solution F(x,y) = c.
Try using this method on a few examples here. I think you'll find it's much simpler -- and easier to remember years later -- than the round-about method used in most textbooks.
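Here's a short sympy sketch of the diagram's recipe on a made-up exact equation -- test for exactness, integrate M with respect to x, then add the piece of N that's still missing:

```python
import sympy as sp

x, y = sp.symbols('x y')

# A hypothetical exact equation: (2*x*y + 3) + (x**2 + 4*y) * dy/dx = 0
M = 2*x*y + 3
N = x**2 + 4*y

# Test for exactness: M_y should equal N_x
print(sp.diff(M, y) == sp.diff(N, x))          # True

# Move "up the diagram": integrate M with respect to x ...
F = sp.integrate(M, x)                          # x**2*y + 3*x
# ... then add whatever part of N is missing from F_y
F += sp.integrate(sp.simplify(N - sp.diff(F, y)), y)   # adds 2*y**2

print(F)   # x**2*y + 3*x + 2*y**2, so the solution is F(x, y) = c
```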
One of the first things students learn in statistics is the "correlation coefficient" r, which measures the strength of the relationship between two variables. The formula given in most textbooks is something like the following:

r = Σ (x_i - x̄)(y_i - ȳ) / [ sqrt( Σ (x_i - x̄)^2 ) * sqrt( Σ (y_i - ȳ)^2 ) ]
where x and y are the data sets we're trying to measure the correlation of.
This formula can be useful, but also has some major disadvantages. It's complex, hard to remember, and gives students almost no insight into what the correlation coefficient is really measuring. In this post I'll explain an alternative way of thinking about "r" as the cosine of the angle between two vectors. This is the "linear algebra view" of the correlation coefficient.
A Different View of Correlation
The idea behind the correlation coefficient is that we want a standard measure of how "related" two data sets x and y are. Rather than thinking about data sets, imagine instead that we place our x and y data into two vectors u and v. These will be two n-dimensional arrows pointing through space. The question is: how "similar" are these arrows to each other? As we'll see below, the answer is given by the correlation coefficient between them.
The figure below illustrates the idea of measuring the "similarity" of two vectors v1 and v2. In the figure, the vectors are separated by an angle theta. A pretty good measure of how "similar" they are is the cosine of theta. Think about what cosine is doing. If both v1 and v2 point in roughly the same direction, the cosine of theta will be about 1. If they point in opposite directions, it will be -1. And if they are perpendicular or orthogonal, it will be 0. In this way, the cosine of theta fits our intuition pretty well about what it means for two vectors to be "correlated" with each other.
What is the cosine of theta in the figure? From the geometry of right triangles, recall that the cosine of an angle is the ratio of the length of the adjacent side to the length of the hypotenuse. In the figure, we form a right triangle by projecting the vector v1 down onto v2. This gives us a new vector p. The cosine of theta is then given by:

cos(theta) = |p| / |v1| = (v1 · v2) / (|v1| |v2|)
Now suppose we're interested in the correlation between two data sets x and y. Imagine we normalize x and y by subtracting from each data point the mean of the data set. Let's call these new normalized data sets u and v. So we have:

u_i = x_i - x̄ and v_i = y_i - ȳ for each observation i
The question is, how "correlated" or "similar" are these vectors u and v to each other in space? That is, what is the cosine of the angle between u and v? This is simple: from the formula derived above, the cosine is given by:

cos(theta) = (u · v) / (|u| |v|)

But since u · v = Σ (x_i - x̄)(y_i - ȳ) and |u| |v| = sqrt( Σ (x_i - x̄)^2 ) * sqrt( Σ (y_i - ȳ)^2 ), this means the cosine of theta is just the correlation coefficient between the two vectors u and v, or:

cos(theta) = r
From this perspective, the correlation coefficient has an elegant geometric interpretation. If two data sets are positively correlated, they should roughly "point in the same direction" when placed into n-dimensional vectors. If they're uncorrelated, they should point in directions that are orthogonal to each other. And if they're negatively correlated, they should point in roughly opposite directions.
The cosine of the angle between two vectors nicely fits that intuition about correlation. So it's no surprise the two ideas are ultimately the same thing -- a much simpler interpretation of "r" than the usual textbook formulas.
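Here's a quick numerical sanity check in Python: generate some made-up correlated data, center it, and compare the cosine of the angle between the centered vectors with numpy's correlation coefficient. The two numbers agree.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=20)
y = 0.5 * x + rng.normal(size=20)      # hypothetical, correlated data

# Center the data
u = x - x.mean()
v = y - y.mean()

# Cosine of the angle between the centered vectors
cos_theta = (u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))

# Textbook correlation coefficient
r = np.corrcoef(x, y)[0, 1]

print(cos_theta, r)    # the two values are identical (up to floating point)
```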
Developing a good model for population growth is a pretty common problem faced by economists doing applied work. In this post, I'll walk through the derivation of a simple but flexible model I've used in the past known as the "logistic" model of population growth.
The Naïve Model: Exponential Growth
The simplest way of modeling population is to assume "exponential" growth. That is, just assume population grows by some annual rate, forever. If we let "y" be a city's population and "k" be the annual growth rate, the exponential growth model is given by

dy/dt = k*y

This is a simple first-order differential equation. We can solve this for "y" by using a technique called "separation of variables". First, we separate variables like this:

dy/y = k dt

Then we integrate both sides and solve for y, as follows:

ln(y) = k*t + C, so y = e^(k*t + C) = e^C * e^(k*t)

Since C is just an arbitrary constant, we can let e^C just equal C, which gives us

y = C * e^(k*t)
where k is the annual growth rate, t is the number of years from today, and C is the population at time t=0. This is the famous "exponential growth" model.
While the exponential model is useful for short-term forecasts, it gives unrealistic estimates for long time periods. After just a few decades, population would rapidly grow toward infinity in this model. A more realistic model should capture the idea that population does not grow forever, but instead levels off around some long-term level. This leads us to our second model.
A Better Model: Logistic Growth
We can improve the above model by making a simple adjustment. Let "A" be the maximum long-term population a city can reasonably sustain. Then multiply the model above by a factor (1 - y/A), giving us

dy/dt = k*y*(1 - y/A)
In this model, the population starts out growing exponentially. But as "y" approaches the maximum level "A", the term (1 - y/A) approaches zero, slowing down the growth rate. In the long run, growth will slow to a crawl as cities approach their maximum sustainable size -- a much more reasonable way to model population growth. This is known as the "logistic" model.
To solve for "y," we can again use separation of variables. However, we'll first need to use a trick from algebra known as the "partial fractions decomposition."
An Aside: The Partial Fractions Decomposition
The partial fractions decomposition is a theorem about rational functions of the form P(x)/Q(x). Here is what it says. If P(x) and Q(x) are polynomials, and P(x)/Q(x) is "proper" -- that is, the order of P(x) is less than the order of Q(x) -- then we can "decompose" P(x)/Q(x) as follows:

P(x)/Q(x) = C1/(x - a1) + C2/(x - a2) + ... + Cn/(x - an)
where a1...an are the n roots of the polynomial Q(x), and C1...Cn are constants. Using this theorem, we can decompose hard-to-handle rational functions into much simpler pieces -- something we'll need to do to solve the logistic population model above.
Back to the Model: Solving the Logistic Equation
Recall that the logistic population model is given by:

dy/dt = k*y*(1 - y/A)

Separating variables, we have:

dy / [ y*(1 - y/A) ] = k dt

The term on the left-hand side is hard to integrate as written. Since it's a proper rational function, we can now use the partial fractions decomposition to simplify it. By the theorem above, we can rewrite it as:

1 / [ y*(1 - y/A) ] = C1/y + C2/(1 - y/A)

To solve for C1 and C2, first multiply both sides by y(1 - y/A) to clear the denominators, like this:

1 = C1*(1 - y/A) + C2*y

This equation is true for all values of y. To solve for C1 and C2, simply plug in values for y that allow us to solve for them. To solve for C1, let y = 0. This "zeros out" C2 in the equation and lets us solve for C1, as follows:

1 = C1*(1 - 0) + 0, so C1 = 1

To solve for C2 we repeat the process, plugging in a value for y that "zeros out" C1. To do this, let y = A, and solve for C2 as follows:

1 = C1*(1 - A/A) + C2*A = C2*A, so C2 = 1/A

Using these constants, now we can rewrite our original function using the partial fractions decomposition as follows:

1 / [ y*(1 - y/A) ] = 1/y + (1/A) / (1 - y/A)

This simpler function can then be plugged into our integration problem above, allowing us to integrate the logistic model and solve for y. Returning to our problem, we have:

∫ [ 1/y + (1/A)/(1 - y/A) ] dy = ∫ k dt

Integrating both sides and solving for y, we have:

ln(y) - ln(1 - y/A) = k*t + C, so y / (1 - y/A) = C * e^(k*t)

To solve for "C" in the equation, note that if we let t=0, C = y0/(1 - y0/A) where y0 is the beginning population. Plugging in and solving for y, we then have

y(t) = A / ( 1 + ((A - y0)/y0) * e^(-k*t) )
This is the famous "logistic model" of population growth. This basic model can then be used to develop pretty reasonable long-term forecasts for city populations.
(See this post at Columbia Economics, LLC for the related comment thread.)
A common problem with time-series data is getting them into the right time interval. Some data are daily or weekly, while others are in monthly, quarterly or annual intervals. Since most regression models require consistent time intervals, an econometrician's first job is usually getting the data into the same frequency.
In this post I'll explain how to solve a common problem I've run into: how to divide quarterly data into monthly data. To do so, we'll use a method known as "cubic spline interpolation." In the example below we use Matlab and Excel. For Stata users, I've posted a Stata do file that illustrates how to work through the below example in Stata.
Cubic Spline Interpolation
One of the most widely used data sources in economics is the National Income and Product Accounts (NIPAs) from the U.S. Bureau of Economic Analysis. They're the official source for U.S. GDP, personal income, trade flows and more. Unfortunately, most data are published only quarterly or annually. So if you're hoping to run a regression using monthly observations -- for example, this simple estimate of the price elasticity of demand for gasoline -- you'll need to split these quarterly data into monthly ones.
A common way to do this is by "cubic spline interpolation." Here's how it works. We start with n quarterly data points. That means we have n-1 spaces between them. Across each space, we draw a unique 3rd-degree (or "cubic") polynomial connecting the two points. This is called a "piecewise polynomial" function.
To make sure our connecting pieces form a smooth curve, we force all our first and second derivatives to be continuous; that is, at each connecting point we make them equal to the derivative on either side. When all these requirements are met -- along with a couple end-point conditions you can read about here -- we have a (4n-4) x (4n-4) linear system that can be solved for the coefficients of all n-1 cubic polynomials.
Once we have these n-1 piecewise polynomials, we can plug in x values for whatever time intervals we want: monthly, weekly or even daily. The polynomials will give us a pretty good interpolation between our known quarterly data points.
An Example Using MATLAB
While the above method seems simple, doing cubic splines by hand is not. A spline for just four data points requires setting up and solving a 12 x 12 linear system, then manually evaluating three different polynomials at the desired x values. That's a lot of work. To get a sense of how hard this is, here's my own Excel file showing what's involved in fitting a cubic spline to four data points by hand.
In practice, the best way to do a cubic spline is to use MATLAB. It takes about five minutes. Here's how to do it.
MATLAB has a built-in "spline()" function that does the dirty work of cubic spline interpolation for you. It requires three inputs: a list of x values from the quarterly data you want to split; a list of y values from the quarterly data; and a list of x values for the monthly time intervals you want. The spline() function formulates the n-1 cubic polynomials, evaluates them at your desired x values, and gives you a list of interpolated monthly y values.
Here's an Excel file showing how to use MATLAB to split quarterly data into monthly. In the file, the first two columns are quarterly values from BEA's Personal Income series. Our goal is to convert these into monthly values. The next three columns (highlighted in yellow) are the three inputs MATLAB needs: the original quarterly x values (x); the original quarterly y values (y); and the desired monthly x values (xx).
In the Excel file, note that the first quarter is listed as month 2, the second quarter as month 5, and so on. Why is this? BEA's quarterly data represent an average value over the three-month quarter. That means they should be treated as a mid-point of the quarter. For Q1 that's month 2, for Q2 that's month 5, and so on.
The next step is to open MATLAB and paste in these three columns of data. In MATLAB, type " x = [ ", cut and paste the column of x values in from Excel, type " ] " and hit return. This creates an n x 1 vector with the x values. Repeat this for the y, and xx values in the Excel file.
Once you have x, y, and xx defined in MATLAB, type "yy = spline(x,y,xx)" and hit return. This will create a new vector yy with the interpolated monthly y values we're looking for. Each entry in yy will correspond to one of the x values you specified in the xx vector.
Copy these yy values from MATLAB, paste them into Excel, and we're done. We now have an estimated monthly Personal Income series.
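If you'd rather stay out of MATLAB, the same interpolation can be sketched in Python with scipy's CubicSpline; the quarterly numbers below are made up for illustration, not BEA data:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical quarterly series, placed at the mid-month of each quarter (2, 5, 8, ...)
x  = np.array([2, 5, 8, 11, 14, 17, 20, 23])                       # quarterly x values
y  = np.array([100.0, 102.5, 103.1, 105.0, 107.2, 108.0, 110.4, 112.9])
xx = np.arange(1, 25)                                               # desired monthly x values

spline = CubicSpline(x, y)    # fits the n-1 piecewise cubic polynomials
yy = spline(xx)               # evaluate them at the monthly points

print(np.round(yy, 2))
```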
Here's an Excel file summarizing the above example for splitting quarterly Personal Income data into monthly using MATLAB. Also, here's a MATLAB file with the x, y, xx, and yy vectors from the above exercise.
Note: For Stata users, here's a "do" file with an example that performs the above cubic spline interpolation in mata.
The Fibonacci sequence is a beautiful mathematical concept, making surprise appearances in everything from seashell patterns to the Parthenon. It's easy to write down the first few terms -- it starts with 0, 1, 1, 2, 3, 5, 8, 13, with each term equal to the sum of the previous two. But what about the 100th or 100,000th term? Can we find them without doing thousands of calculations?
In a previous post, I explained how simple linear models can help us understand where systems like this are headed in the long run. In this post, I'll explain how the same tools can help us see the "long run" values of the Fibonacci sequence. The result is an elegant model that illustrates the connection between the Fibonacci sequence and the so-called "golden ratio", an aesthetic principle appearing throughout art, architecture, music, book design and more.
Setting Up the Model
Each term in the Fibonacci sequence equals the sum of the previous two. That is, if we let Fn denote the nth term in the sequence we can write F_n = F_(n-1) + F_(n-2). To make this into a linear system, we need at least one more equation. Let's use an easy one: F_(n-1) = F_(n-1). Putting these together, here's our system of linear equations for the Fibonacci sequence:

F_n = F_(n-1) + F_(n-2)
F_(n-1) = F_(n-1)

We can write this in matrix notation as the following:

[ F_n     ]   =   [ 1  1 ] [ F_(n-1) ]
[ F_(n-1) ]       [ 1  0 ] [ F_(n-2) ]
Here's what this is saying. If we start with the vector on the right and multiply it by this 2 x 2 matrix, the result is the vector on the left. That is, multiplying our starting vector by the matrix above gives us the next element in the sequence. The n-1 element from the right becomes the n element on the left, and the n-2 element becomes the n-1 element. Each time we multiply by this matrix we're moving the system from one term of the Fibonacci sequence to the next.
As explained here, the general form for dynamic linear models like this is:

u1 = A * u0
where u0 is the initial state, u1 is the final state, and A is the transformation matrix that moves us from one state to the next. As explained in the earlier post, we can use eigenvalues and eigenvectors to solve for the long run values of systems like this.
Here's how we do that. Let S be a 2 x 2 matrix where the columns are the two eigenvectors of our transformation matrix A. And let Λ be a 2 x 2 matrix with the two eigenvalues of A along the diagonal and zeros elsewhere. By the definition of eigenvalues and eigenvectors, we have the following identity:

A * S = S * Λ

Solving for A, we have:

A = S * Λ * S^(-1)

Plugging this into our above equation relating u0 and u1, we have the following system:

u1 = S * Λ * S^(-1) * u0

This equation relates the initial state vector u0 to the next state u1. But each time we multiply by A it moves us to the next state, and the next state, and so on. Imagine we multiply k times. We get the following general form:

u_k = S * Λ^k * S^(-1) * u0
This equation is what we're looking for. It allows us to quickly find the kth term in the Fibonacci sequence with a simple calculation. It relies only on the initial state vector u0 and the eigenvalues and eigenvectors of the transformation matrix A. That's our linear model.
Putting the Pieces Together
Now that we have a model, let's find the S, Λ, and other parts that make it up.
The first step is to find the eigenvalues of the A matrix. This is given by:

A = [ 1  1 ]
    [ 1  0 ]
We find the eigenvalues by solving Ax = lambda*x for the values of lambda that make that equation true (that's the definition of an eigenvalue). That's the same as solving Ax - lambda*x = 0, or (A - lambda*I)x = 0. This implies the matrix (A - lambda*I) is singular, or not invertible. So the determinant of (A - lambda*I) must be zero. That means we can find the eigenvalues by setting the determinant of (A - lambda*I) equal to zero and solving for lambda (this is called the "characteristic polynomial"). Here's how we do that:

det(A - lambda*I) = (1 - lambda)(0 - lambda) - (1)(1) = lambda^2 - lambda - 1 = 0
Plugging this into the quadratic formula with a = 1, b = -1 and c = -1, you'll get the following two solutions, which are our two eigenvalues:

lambda_1 = (1 + sqrt(5))/2 ≈ 1.618 and lambda_2 = (1 - sqrt(5))/2 ≈ -0.618
For a quick check that these are right, note that the trace of A is 1. The sum of the two eigenvalues is also 1. Since the eigenvalues must always sum to the trace, we've got the right values here.
The next step is to find the two eigenvectors that correspond to the eigenvalues. Let's start with lambda_1 = (1 + sqrt(5))/2. To do this, we write the following:

(A - lambda_1*I) x = 0

This implies that (1 - lambda_1)*x1 + x2 = 0. Our "free" variable here is x2, so we'll let that equal 1. Then we can solve for x1. We get the following:

x1 = 1/(lambda_1 - 1) = 2/(sqrt(5) - 1)

Using the old algebra trick for the difference of squares -- that is, (a - b)(a + b) = a^2 - b^2 -- we can simplify this by multiplying both numerator and denominator by (sqrt(5) + 1) as follows:

x1 = 2*(sqrt(5) + 1) / [ (sqrt(5) - 1)(sqrt(5) + 1) ] = 2*(sqrt(5) + 1)/4 = (1 + sqrt(5))/2

So our first eigenvector v1 is equal to:

v1 = ( (1 + sqrt(5))/2, 1 )

Following the same process for lambda_2 = (1 - sqrt(5))/2 gives us the other eigenvector:

v2 = ( (1 - sqrt(5))/2, 1 )
Note that the vectors v1 and v2 are orthogonal to each other. That is, the dot product between them is zero. This comes from a basic theorem of linear algebra: every symmetric matrix will have orthogonal eigenvectors. Since A is symmetric, our two eigenvectors are perpendicular to each other.
Now that we have the eigenvalues and eigenvectors, we can write the S and Λ matrices as follows:

S = [ (1 + sqrt(5))/2   (1 - sqrt(5))/2 ]        Λ = [ (1 + sqrt(5))/2        0         ]
    [        1                  1       ]            [        0         (1 - sqrt(5))/2 ]
To complete our model, we also need to know the inverse of the S matrix. Thankfully, there's a simple formula for the inverse of a 2 x 2 matrix. If A is given by:

A = [ a  b ]
    [ c  d ]

then the inverse of A is found by swapping the diagonal terms, putting a negative sign on the off-diagonal terms, and multiplying by 1/(determinant of A), or

A^(-1) = 1/(ad - bc) * [  d  -b ]
                       [ -c   a ]
Using that formula, and noting that the determinant of S is sqrt(5), we get:

S^(-1) = (1/sqrt(5)) * [  1   -(1 - sqrt(5))/2 ]
                       [ -1    (1 + sqrt(5))/2 ]
As explained in our earlier post, our final step is to write our initial state vector u0 as a combination of the columns of S. That is, we can write:

u0 = S * c

where c is a 2 x 1 vector of scalars. Solving for c, we get:

c = S^(-1) * u0
For this example, let's let our initial state vector u0 be (1,1). These are the second and third terms of the Fibonacci sequence. Note that you can use any two subsequent terms for this step -- I'm just using (1,1) because I like the way the math works for it. So we have:
Since , that means
Putting everything together, we can write our final model for the Fibonacci sequence:
Multiplying this out, we get the following extensive form of the model:
This equation gives the k+3 and k+2 terms of the Fibonacci sequence as a function of just one variable: k. This allows us to easily find any term we'd like -- just plug in k. For example, imagine we want the 100th term. Simply let k = 98, and solve the above for F101 and F100. This is a huge number, by the way -- 218,922,995,834,555,000,000 -- something you can easily verify in this Excel spreadsheet. (Note that whether this is the 99th or 100th term depends on whether you label 0 or 1 to be the first term of the sequence; here I've made zero the first term, but many others use 1.)
So what happens to the above system as k grows very large? Do the terms in the Fibonacci sequence display some regular pattern as we move outward?
All changes from one term to the next are determined by k in the above model. Imagine k grows very large. In the second term in the equation, we can see that

lambda_2^k = ( (1 - sqrt(5))/2 )^k → 0 as k → ∞, since |lambda_2| < 1.

That means that as k gets bigger, the second term in the equation goes to zero. That leaves only the first term. That means as k grows large, the sequence is governed entirely by lambda_1^k.

So as k grows large, the ratio of the k+1 to the k term in the Fibonacci sequence approaches a constant, or

F_(k+1) / F_k → lambda_1 = (1 + sqrt(5))/2 ≈ 1.618
This is a pretty amazing result. As we move far out in the Fibonacci sequence, the ratio of two subsequent terms approaches a constant. And that constant is equal to the first eigenvalue of our linear system above.
More importantly, this value is also equal to the famous "golden ratio", which appears in myriad forms throughout Western art, architecture, music and more. In the limit, the ratio of subsequent terms in the Fibonacci sequence equals to the golden ratio -- something that's not obvious at all, but which we can verify analytically using our model above.
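Here's a compact numpy sketch of the same machinery: build A, take its eigendecomposition, and advance the starting state with S Λ^k S^(-1). It also prints the dominant eigenvalue, which is the golden ratio.

```python
import numpy as np

# Transformation matrix for the Fibonacci recurrence
A = np.array([[1, 1],
              [1, 0]])

eigvals, S = np.linalg.eig(A)        # columns of S are the eigenvectors

def fib_step(k, u0=np.array([1.0, 1.0])):
    """Advance the starting state u0 = (1, 1) by k steps using S * Lambda^k * S^(-1)."""
    Lk = np.diag(eigvals ** k)
    return S @ Lk @ np.linalg.inv(S) @ u0

print(np.round(fib_step(10)))        # [144.  89.] -- two consecutive Fibonacci numbers
print(max(eigvals))                  # 1.6180339..., the golden ratio
```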
If you'd like to see how this works in a spreadsheet, here's an Excel file where you can plug in values for k and find the k+2 and k+1 terms in the sequence.
At first glance, the binomial distribution and the Poisson distribution seem unrelated. But a closer look reveals a pretty interesting relationship. It turns out the Poisson distribution is just a special case of the binomial -- where the number of trials is large, and the probability of success in any given one is small.
In this post I'll walk through a simple proof showing that the Poisson distribution is really just the binomial with n approaching infinity and p approaching zero.
The binomial distribution works when we have a fixed number of events n, each with a constant probability of success p. Imagine we don't know the number of trials that will happen. Instead, we only know the average number of successes per time period. So we know the rate of successes per day, but not the number of trials n or the probability of success p that led to that rate.
Define a number lambda. Let this be the rate of successes per day. It's equal to np. That's the number of trials n -- however many there are -- times the chance of success p for each of those trials. Think of it like this: if the chance of success is p and we run n trials per day, we'll observe np successes per day on average. That's our observed success rate lambda.
Recall that the binomial distribution looks like this:

P(X = k) = [ n! / (k!(n-k)!) ] * p^k * (1-p)^(n-k)

As mentioned above, let's define lambda as follows:

lambda = n*p

Solving for p, we get:

p = lambda/n
What we're going to do here is substitute this expression for p into the binomial distribution above, take the limit as n goes to infinity, and try to come up with something useful. That is,

lim (n→∞) [ n! / (k!(n-k)!) ] * (lambda/n)^k * (1 - lambda/n)^(n-k)

Pulling out the constants lambda^k and 1/k!, and splitting the term on the right that's to the power of (n-k) into a term to the power of n and one to the power of -k, we get

(lambda^k / k!) * lim (n→∞) [ n! / ((n-k)! * n^k) ] * (1 - lambda/n)^n * (1 - lambda/n)^(-k)
Now let's take the limit of this right-hand side one term at a time. We'll do this in three steps. The first step is to find the limit of

lim (n→∞) n! / ((n-k)! * n^k)
In the numerator, we can expand n! into n terms of (n)(n-1)(n-2)...(1). And in the denominator, we can expand (n-k)! into n-k terms of (n-k)(n-k-1)(n-k-2)...(1). That is,

n! / ((n-k)! * n^k) = [ (n)(n-1)(n-2)...(1) ] / [ (n-k)(n-k-1)(n-k-2)...(1) * n^k ]

Written this way, it's clear that many of the terms on the top and bottom cancel out. The (n-k)(n-k-1)...(1) terms cancel from both the numerator and denominator, leaving the following:

[ (n)(n-1)(n-2)...(n-k+1) ] / n^k

Since we canceled out n-k terms, the numerator here is left with k terms, from n to n-k+1. So this has k terms in the numerator, and k terms in the denominator since n is to the power of k. Expanding out the numerator and denominator we can rewrite this as:

(n/n) * ((n-1)/n) * ((n-2)/n) * ... * ((n-k+1)/n)
This has k terms. Clearly, every one of these k terms approaches 1 as n approaches infinity. So we know this portion of the problem just simplifies to one. So we're done with the first step.
The second step is to find the limit of the term in the middle of our equation, which is

lim (n→∞) (1 - lambda/n)^n

Recall that the definition of e = 2.718... is given by the following:

e = lim (n→∞) (1 + 1/n)^n

Our goal here is to find a way to manipulate our expression to look more like the definition of e, which we know the limit of. Let's define a number x as x = -n/lambda, so that lambda/n = -1/x and n = -lambda*x. Now let's substitute this into our expression and take the limit as follows:

lim (1 - lambda/n)^n = lim (1 + 1/x)^(-lambda*x) = [ lim (1 + 1/x)^x ]^(-lambda) = e^(-lambda)

This term just simplifies to e^(-lambda). So we're done with our second step. That leaves only one more term for us to find the limit of. Our third and final step is to find the limit of the last term on the right, which is

lim (n→∞) (1 - lambda/n)^(-k)
This is pretty simple. As n approaches infinity, this term becomes 1^(-k) which is equal to one. And that takes care of our last term.
Putting these three results together, we can rewrite our original limit as

(lambda^k / k!) * 1 * e^(-lambda) * 1

This just simplifies to the following:

P(k) = (lambda^k * e^(-lambda)) / k!

This is the familiar probability mass function for the Poisson distribution, which gives us the probability of k successes per period given our parameter lambda. So we've shown that the Poisson distribution is just a special case of the binomial, in which the number of n trials grows to infinity and the chance of success in any particular trial approaches zero. And that completes the proof.
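Here's a quick numerical sketch of the convergence using scipy: hold lambda fixed, let n grow and p = lambda/n shrink, and compare the binomial probability of k = 2 successes with the Poisson probability.

```python
from scipy.stats import binom, poisson

lam = 3.0                      # average successes per period
for n in (10, 100, 10_000):
    p = lam / n
    print(n, binom.pmf(2, n, p), poisson.pmf(2, lam))
# As n grows (and p shrinks), the binomial probabilities converge to the Poisson ones.
```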
The method of Lagrange multipliers is the economist's workhorse for solving optimization problems. The technique is a centerpiece of economic theory, but unfortunately it's usually taught poorly. Most textbooks focus on mechanically cranking out formulas, leaving students mystified about why it actually works to begin with.
In this post, I'll explain a simple way of seeing why Lagrange multipliers actually do what they do -- that is, solve constrained optimization problems through the use of a semi-mysterious Lagrangian function.
Before you can see why the method works, you've got to know something about gradients. For functions of one variable there is -- usually -- one first derivative. For functions of n variables, there are n first derivatives. A gradient is just a vector that collects all the function's partial first derivatives in one place.
Each element in the gradient is one of the function's partial first derivatives. An easy way to think of a gradient is that if we pick a point on some function, it gives us the "direction" the function is heading. If our function is labeled f(x1, x2, ..., xn), the notation for the gradient of f is ∇f.
The most important thing to know about gradients is that they always point in the direction of a function's steepest slope at a given point. To help illustrate this, take a look at the drawing below. It illustrates how gradients work for a two-variable function of x1 and x2.
The function f in the drawing forms a hill. Toward the peak I've drawn two regions where we hold the height of f constant at some level a. These are called level curves of f, and they're marked f = a1, and f = a2.
Imagine yourself standing on one of those level curves. Think of a hiking trail on a mountainside. Standing on the trail, in what direction is the mountain steepest? Clearly the steepest direction is straight up the hill, perpendicular to the trail. In the drawing, these paths of steepest ascent are marked with arrows. These are the gradients at various points along the level curves. Just as the steepest hike is always perpendicular to our trail, the gradients of f are always perpendicular to its level curves.
That's the key idea here: level curves are where f is held constant at some value a, and the gradient ∇f is always perpendicular to them.
How the Method Works
To see how Lagrange multipliers work, take a look at the drawing below. I've redrawn the function f from above, along with a constraint g = c. In the drawing, the constraint is a plane that cuts through our hillside. I've also drawn a couple level curves of f.
Our goal here is to climb as high on the hill as we can, given that we can't move any higher than where the constraint g = c cuts the hill. In the drawing, the boundary where the constraint cuts the function is marked with a heavy line. Along that line are the highest points we can reach without stepping over our constraint. That's an obvious place to start looking for a constrained maximum.
Imagine hiking from left to right on the constraint line. As we gain elevation, we walk through various level curves of f. I've marked two in the picture. At each level curve, imagine checking its slope -- that is, the slope of a tangent line to it -- and comparing that to the slope on the constraint where we're standing. If our slope is greater than the level curve, we can reach a higher point on the hill if we keep moving right. If our slope is less than the level curve -- say, toward the right where our constraint line is declining -- we need to move backward to the left to reach a higher point.
When we reach a point where the slope of the constraint line just equals the slope of the level curve, we've moved as high as we can. That is, we've reached our constrained maximum. Any movement from that point will take us downhill. In the figure, this point is marked with a large arrow pointing toward the peak.
At that point, the level curve f = a2 and the constraint have the same slope. That means they're parallel and point in the same direction. But as we saw above, gradients are always perpendicular to level curves. So if these two curves are parallel, their gradients must also be parallel. That means the gradients of f and g both point in the same direction, and differ at most by a scalar. Let's call that scalar lambda. That is,

∇f = lambda * ∇g

Solving for zero, we get

∇f - lambda * ∇g = 0
This is the condition that must hold when we've reached the maximum of f subject to the constraint g = c.
Now, if we're clever we can write a single equation that will capture this idea. This is where the familiar Lagrangian equation comes in:

L = f - lambda * (g - c)

or more explicitly,

L(x1, x2, lambda) = f(x1, x2) - lambda * ( g(x1, x2) - c )
To see how this equation works, watch what happens when we follow the usual Lagrangian procedure.
First, we find the three partial first derivatives of L,
and set them equal to zero. That is, we need to set the gradient equal to zero.
To find ∇L, we take the three partial derivatives of L with respect to x1, x2 and lambda. Then we place each as an element in a 3 x 1 vector. That gives us the following:

∇L = ( ∂f/∂x1 - lambda*∂g/∂x1,  ∂f/∂x2 - lambda*∂g/∂x2,  -(g(x1, x2) - c) )
Recall that we have two "rules" to follow here. First, the gradients of f and g must point in the same direction, or ∇f = lambda*∇g. And second, we have to satisfy our constraint, or g(x1, x2) = c.

The first and second elements of ∇L make sure the first rule is followed. That is, setting them to zero forces ∇f - lambda*∇g = 0, assuring that the gradients of f and g both point in the same direction. The third element of ∇L is simply a trick to make sure g = c, which is our constraint. In the Lagrangian function, when we take the partial derivative with respect to lambda, it simply returns back to us our original constraint equation.
At this point, we have three equations in three unknowns. So we can solve this for the optimal values of x1 and x2 that maximize f subject to our constraint. And we're done. So the bottom line is that Lagrange multipliers is really just an algorithm that finds where the gradient of a function points in the same direction as the gradients of its constraints, while also satisfying those constraints.
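Here's a small sympy sketch of the whole procedure on a made-up problem -- maximize f = x1*x2 subject to x1 + x2 = 10 -- by building the Lagrangian and solving the gradient conditions:

```python
import sympy as sp

x1, x2, lam = sp.symbols('x1 x2 lambda')

# A hypothetical problem: maximize f = x1*x2 subject to g = x1 + x2 = 10
f = x1 * x2
g = x1 + x2
c = 10

L = f - lam * (g - c)                  # the Lagrangian

grad_L = [sp.diff(L, v) for v in (x1, x2, lam)]   # three first-order conditions
solution = sp.solve(grad_L, (x1, x2, lam), dict=True)
print(solution)                        # [{x1: 5, x2: 5, lambda: 5}]
```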
As with most areas of mathematics, once you see to the bottom of things -- in this case, that optimization is really just hill-climbing, which everyone understands -- things are a lot simpler than most economists make them out to be.
Most people first meet sine and cosine in a basic geometry class. The lesson starts with a right triangle, and derives sine and cosine diagrammatically using angles, lines and ratios. This is the "triangle view" of the trigonometric functions.

Most people are surprised to learn that's not the only way to derive them. While the triangle view of sine and cosine is intuitive, it actually conceals many interesting properties. But more importantly, the focus on lines and angles gives many people the impression that sine and cosine are "something from geometry" -- not objects with their own interesting properties and deep connections to other branches of math.
In this post, I'll explain a more elegant way of deriving sine and cosine that uses no geometry. This is the "real analysis" view. We'll start with a single assumption, and develop them analytically based on our earlier discussion of the Taylor series expansion (read here).
Imagine a function with the following property. The function's second derivative is equal to the negative of the function itself. That is,

f''(x) = -f(x)
This is our starting assumption. It's just another way of saying that sine and cosine -- whatever they are -- need to have this one property. If we start with sine or cosine and take their derivative twice, we should end up back at the original function with a negative sign on it. Our goal is to show how to get from this assumption to actual algebraic functions for sine and cosine.
Let's take a closer look at our assumption. It relates the function to its second derivative -- a simple differential equation. But what about the other derivatives? To find them, let's take the next few derivatives of our assumption and see what happens.

Differentiating both sides and solving each time for f'', f''' and so on, we get the following

f'' = -f,  f''' = -f',  f'''' = f,  f''''' = f',  f'''''' = -f, ...

We can already see the pattern here. As we take more and more derivatives, we always end up back at either f or f'. The only thing that changes is the positive or negative sign in front.
Now that we have n derivatives of f, we're ready to use our earlier finding about Taylor series expansions and write an expression for f.
Recall the Taylor expansion of f around the point c is given by

f(x) = f(c) + f'(c)(x - c) + f''(c)(x - c)^2/2! + f'''(c)(x - c)^3/3! + ...

Letting c = 0 -- which is technically a Maclaurin series rather than a Taylor series -- we get the following expansion

f(x) = f(0) + f'(0)x + f''(0)x^2/2! + f'''(0)x^3/3! + ...
But as we've seen above, we already have expressions for the second, third and so on derivatives of f. They're all either f, -f, f' or -f'. Substituting these in, we get the following

f(x) = f(0) + f'(0)x - f(0)x^2/2! - f'(0)x^3/3! + f(0)x^4/4! + f'(0)x^5/5! - ...
Notice the pattern in the signs. The expansion starts with two positive terms, then has two negative terms, then has two positive terms, and so on. The pattern repeats forever, and each term involves only f or f'.
Since the function is really based on two parameters -- f(0) and f'(0) -- we can rewrite this in a more useful way as follows. Label our two parameters c1 and c2. That is, let f(0) = c1, and let f'(0) = c2. Rewriting with this notation we get

f(x) = c1*(1 - x^2/2! + x^4/4! - ...) + c2*(x - x^3/3! + x^5/5! - ...)
Now let's pick two sets of values for our parameters c1 and c2. These will define two distinct functions, both of which will satisfy our initial assumption.
First, let c1 = 0, and c2 = 1. Plugging into the above, we get the following function for sine

sin(x) = x - x^3/3! + x^5/5! - x^7/7! + ...

This is the sine function we're looking for. When x = 0, the above function is zero. When x = pi/2 the function is one. And it works for all real values of x.
For our second case, let c1 = 1, and c2 = 0. Plugging this into our expansion, we get the following for cosine:

cos(x) = 1 - x^2/2! + x^4/4! - x^6/6! + ...

This is the cosine function we're looking for. When x = 0, the function is one. When x = pi/2, it's equal to zero. And like the sine function above, it works for all real values of x.
We can easily check that these both satisfy our initial assumption that f'' = -f. And we can also verify another basic relationship between sine and cosine from calculus by taking the first derivative of each, which is that

d/dx [sin(x)] = cos(x) and d/dx [cos(x)] = -sin(x)
Now that we've developed functions for sine and cosine, we can go on to derive all the other trigonometric functions -- tangent, cosecent, and so on -- from these.
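Here's a quick Python check that partial sums of these series really do reproduce the library sine and cosine:

```python
import math

def sin_series(x, terms=10):
    """Partial sum of x - x^3/3! + x^5/5! - ..."""
    return sum((-1)**k * x**(2*k + 1) / math.factorial(2*k + 1) for k in range(terms))

def cos_series(x, terms=10):
    """Partial sum of 1 - x^2/2! + x^4/4! - ..."""
    return sum((-1)**k * x**(2*k) / math.factorial(2*k) for k in range(terms))

x = 1.2
print(sin_series(x), math.sin(x))   # the partial sums agree with the library values
print(cos_series(x), math.cos(x))
```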
For more on the "real analysis" view of sine and cosine, see this famous primer, especially Chapter 7, Section 4.
Here's another method for estimating pi that basically requires a hardwood floor, a bag of sewing needles and plenty of time.
Imagine standing over a hardwood floor. Notice the parallel lines between the boards. Now imagine dropping a single sewing needle onto it. It bounces, and comes to rest. Question: What is the probability the needle will come to rest on a line?
The picture below illustrates the problem. The distance between the lines is marked d, and the length of the needle is marked k. Let's assume k is less than or equal to d. The illustration shows two possible resting points for the needle.
Looking at the picture, it's easy to see how to approach the problem. All we care about is the horizontal "width" of the needle when it comes to rest, relative to the total distance between the lines d. The ratio of the former to the latter will be the probability of landing on a line.
In the picture, the horizontal width of the needle is given by |k cos(theta)|. We use the absolute value since physical length is always positive. This is marked in the picture.
Here's how we reason this out. First, the needle's angle theta is uniformly distributed over 0 to 2 pi, so the weight attached to any particular angle is just 1/(2 pi). And given the angle it happens to land at, the probability it will cross a line is |k cos(theta)| / d, which is just the horizontal width of the needle divided by the distance between lines. (Note this only works for k <= d.)
Putting these two probabilities together -- since both events must occur at the same time -- we get the probability that a particular needle at angle theta will cross a line as
But this is only for one theta. What about the others? To find the probability that a dropped needle will hit a line for every possible theta, we have the following integral
Pulling out the constants, we can re-write this as
The key to simplifying this is to note the following relationship between ordinary cosine and the absolute value of cosine. For ordinary cosine, the integral between 0 and 2pi is just zero -- half the time it's positive over that range, and half the time it's negative. But we want the absolute value of cosine, which is always positive.
The picture below illustrates how to translate between the two.
The picture shows regular cosine plotted between 0 and 2pi. In absolute value terms, the areas under the curve marked 1, 2, 3, and 4 are all equal. The two positive areas exactly equal the two negative areas. That's another way of saying that the integral of |cos(theta)| from 0 to 2pi is equal to four times the integral of cos(theta) from 0 to pi/2.
That means we can re-write our integral as
So the probability that a needle will land on a line is just 2k/(pi d), as long as k is less than or equal to d.
Now imagine we're clever and choose sewing needles exactly the length of the width of boards in our floor. In that case, k = d. Then the probability it will hit a line becomes 2/pi.
Now we have an expression we can use to estimate pi. The process is simple.
Start tossing needles on the floor. Count how many land on the lines, as a percentage of total throws. That's P(k,d). Then divide this number into 2. The result will be pretty close to pi. The more needles you throw, the better the estimate will get.
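If you'd rather simulate than sew, here's a quick sketch in Python (the function name and the 100,000 throws are my own choices). It parameterizes each drop by the distance from the needle's midpoint to the nearest line plus the needle's angle, which is a slightly different -- but equivalent -- way of setting up the same probability.

import math, random

def estimate_pi_buffon(throws=100_000, k=1.0, d=1.0):
    hits = 0
    for _ in range(throws):
        center = random.uniform(0, d / 2)      # midpoint's distance to the nearest line
        theta = random.uniform(0, math.pi)     # angle of the needle
        if (k / 2) * math.sin(theta) >= center:
            hits += 1                          # the needle crosses a line
    p = hits / throws                          # estimate of P(k, d)
    return 2 * k / (p * d)                     # rearranging P = 2k / (pi d)

print(estimate_pi_buffon())

With k = d, the printed value hovers around 3.14, wobbling a bit from run to run.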
I won't bore you with the enormous literature of mathematicians actually trying to estimate pi using this awkward method. But for those who want to see how this works with a Java-based simulation, see here.
The Taylor expansion is one of the most beautiful ideas in mathematics. The intuition is simple: most functions are smooth over ranges we’re interested in. And polynomials are also smooth. So for every smooth function, we should be able to write down a polynomial that approximates it pretty well. And in fact, if our polynomial contains enough terms it will exactly equal the original function. Since polynomials are easier to work with than almost any other kind of function, this usually turns hard problems into easy ones.
The Taylor formula is the key. It gives us an equation for the polynomial expansion for every smooth function f. However, while the intuition behind it is simple, the actual formula is not. It can be pretty daunting for beginners, and even experts have a hard time remembering if they haven’t seen it for a while.
In this post, I’ll explain a quick and easy trick I use to re-derive the Taylor formula from scratch whenever I have trouble remembering it.
Starting from Scratch
The idea behind the Taylor expansion is that we can re-write every smooth function as an infinite sum of polynomial terms. The first step is therefore to write down a general nth-degree polynomial. Here it is:
Where a0, a1, … are coefficients on each polynomial term, and c is a constant that represents where along the x-axis we want to start our approximation (if we don’t care where we start, just let c = 0, which is technically known as a Maclaurin rather than a Taylor). This series -- known as a "power series" -- can be written in closed form as the following:
The goal here is to find a clever way to determine the coefficients a0, a1, … in that equation, given some function f and an initial value of c. Here is the logic for doing that.
Polynomials are smooth, so that guarantees they’re differentiable. That is, we can calculate the first, second, third and so on derivatives of them. So starting with our polynomial above, let’s take the first few derivatives of it, like this:
Clearly we’re seeing a pattern already. We’ll use that in a minute. Now that we have n derivatives of f, let’s evaluate them for some number that will cause most of their terms to drop away. This is the key step. If we’re clever, we’ll notice that if we evaluate them at x = c, most of their terms will go to zero. That will leave behind only the coefficients a1, a2, … multiplied by some constant. So here’s that step:
Now we have a set of simple equations we can solve for a1, a2, … Simply divide both sides by n!. That gives us the following:
The pattern here is beautiful. The nth coefficient is just the nth derivative of the original function, evaluated at c, divided by n factorial. Now we have our n coefficients. The next step is to plug them back into our beginning expression for a general nth-degree polynomial, like this:
This equation is what we’re looking for. It gives a polynomial expansion for every smooth function f. We just need to calculate the first n derivatives of f, evaluate them at c, divide each one by n!, and sum up these terms. The result will be a good approximation to our original function. The more terms we add on, the more accurate the polynomial approximation will be.
The Result: the Taylor Formula
The final step is to write this infinite series in closed form. This is the last step in the trick for remembering the formula. Writing the above as a summation, we get our final result:
The lesson here is simple: don’t waste your time learning formulas. Learn methods. If you can remember the basic logic of where the Taylor expansion comes from, you can quickly and easily re-derive the formula from scratch.
To see how this works in a spreadsheet, here's an Excel file that gives a step-by-step process for the Taylor expansion of f(x) = 2^x at x = 3, using 6 polynomial terms.
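If you'd rather see that exercise in code than in a spreadsheet, here's a sketch in Python (the function name is mine; the only extra fact used is that the nth derivative of 2^x is 2^x times (ln 2)^n).

import math

def taylor_2_to_x(x, c=3.0, terms=6):
    total = 0.0
    for n in range(terms):
        deriv_at_c = 2**c * math.log(2)**n           # nth derivative of 2^x, evaluated at c
        total += deriv_at_c / math.factorial(n) * (x - c)**n
    return total

for x in [2.5, 3.0, 3.5, 4.0]:
    print(x, taylor_2_to_x(x), 2**x)

At x = 3 the approximation is exact, and it stays close to the true value of 2^x as long as x doesn't wander too far from the expansion point.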
Most people first encounter the “rule of 72” in a basic finance course. The idea is simple: divide 72 by your growth rate times 100, and the result is the number of years it takes to double the investment. However, there's some interesting math going on behind this rule that most people never learn.
In this post I’ll explain the hidden math of the rule of 72, show you how it’s derived algebraically, and point out some important limitations.
The key idea behind the rule of 72 is that it’s designed for quick mental calculations. Seventy-two gives a decent ballpark approximation for doubling times. But more importantly, 72 is evenly divisible by lots of common growth rates -- 2, 3, 4, 6, 8, 9, 12, 18, and so on. That makes it ideal for back-of-the-envelope approximations.
However, 72 has some major drawbacks. It's inaccurate in many cases, sometimes giving answers that are off by years. Recognizing this, some textbooks offer more accurate (but harder to use) alternatives. Some recommend a “rule of 70,” while others suggest a “rule of 69.” While these are both harder to use, they're considerably more accurate in many cases. As we'll see in a minute, the rule of 69 is probably the most mathematically defensible of all, despite being pretty useless for mental calculation.
Deriving the Optimal Rule of Thumb
So where do these rules come from? In this section, I'll show you how they're derived algebraically from basic compound interest formulas.
When we say we're looking for a rule of thumb for doubling time, here's what we really mean. We want some constant M (like 72, 70, 69, etc.) that we can divide by our growth rate times 100 and get a good estimate of the time to double an investment. The first step is to write an expression for the value of the investment at time t, given some initial value a0. Then we set this expression equal to twice the initial value, or 2a0. That is
The a0s cancel out on both sides of the above, leaving us with
Taking the natural log of both sides and solving for t, we get
This expression gives the time t it takes to double the investment, given some r and k. What we want is a number M such that when we divide by 100r, it gives us t. That is, we want an M such that M/(100r) = t. Solving for M, we see that M = t(100r). So our rule of thumb M is simply given by 100r times our equation for t above, or
As you can see, the optimal rule of thumb M is a function of both r and k. So for different growth rates and frequency of compounding, you’ll want different rules of thumb. The ubiquitous M=72 is really only a good rule of thumb for a very small number of cases.
The graph below is a plot of the function M(r,k) for r between 1 percent and 30 percent and k between 1 and 30. R and k are shown on the horizontal axes, and the optimal rule M is given on the vertical axis. The colored bands show areas where each rule is optimal. For example, the light green band near the center shows where the rule of 70 is optimal, and the flat red plane shows where the rule of 69 is best, and so on.
As you can see, the optimal rules are all over the map for small values of k. In the extreme case of k=1, the usual rules of 70 and 72 are only close for growth rates between 1 percent and 10 percent. Beyond that they quickly break down. When k=1, the optimal rule M grows linearly as r increases, so no rule holds up well in that range. Overall, 72 looks pretty lousy from this perspective -- it's the small aqua-blue band in the center-left. Seventy looks slightly better -- the green band corresponding to 70 covers much more area -- but it also only works over a pretty small range.
A Special Case: The Rule for Continuous Compounding
The rule that seems to hold up best is actually the awkward rule of 69, which corresponds to the broad, flat plane of red on the right-hand side of the graph. It works for a huge range of values as the function flattens out to the right and roughly approaches 69. Beyond k=15, the graph is essentially flat regardless of the growth rate r. This is no surprise, as the rule of 69 turns out to be a special case for continuously compounded growth rates.
With continuously compounded growth, k is very large. To see how this affects our optimal rule M, let's return to our first equation for the time to double our investment.
If growth is continuously compounded, we have the following equation for the value at time t
Again, the a0s cancel out on both sides, giving us
Solving for t we get
Since we're looking for an M such that M/(100r)=t, or M=(100r)t, we get the following optimal rule M
So with continuously compounded growth, the optimal rule becomes pretty simple. Just divide (100)ln 2 -- or roughly 69.3 -- by your growth rate times 100. In the graph, that’s the value our function M is approaching as it flattens out to the right. Put differently, the limit of M as k approaches infinity is (100)ln 2 or about 69.3.
We can easily see this result analytically by taking the limit of the function M as k grows to infinity. That is, we can find the following limit
To work out this limit, we need to do some tricky algebra to the denominator. First, let's move the k from the left side up into the exponent on the log to its right. Then let's define a new number x such that x = k/r. This trick will let us collapse most of the denominator into the constant e, using the fact that e equals the limit of (1 + 1/x)^x as x grows to infinity.
Substituting in x = k/r, we get
Since x = k/r, letting k grow to infinity is equivalent to letting x grow to infinity. But we know that the term in the denominator (1+1/x)^x just becomes e as x grows to infinity. So we have
And that's the result we alluded to above. As k grows, the function M converges very quickly to 69.314... So that's the optimal rule for a huge number of values for r and k -- infinitely many. So in some long-run mathematical sense, the most defensible rule actually turns out to be the awkward and practically useless rule of 69.
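Here's a small sketch in Python that reconstructs the optimal rule M(r, k) from the algebra above and checks the limit numerically. The function name and the particular r and k values are my own choices, picked only to illustrate the bands in the graph.

import math

def optimal_rule(r, k):
    # t = ln 2 / (k ln(1 + r/k)) years to double; M = 100 r t
    return 100 * r * math.log(2) / (k * math.log(1 + r / k))

print(optimal_rule(0.08, 1))     # annual compounding at 8 percent: close to 72
print(optimal_rule(0.02, 1))     # annual compounding at 2 percent: close to 70
print(optimal_rule(0.08, 365))   # daily compounding: close to 69.3
print(100 * math.log(2))         # the continuous-compounding limit, about 69.3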
To see how this argument works in a spreadsheet, here's an Excel file with the equations, data and graph from above.
Here's a method for estimating pi that basically requires a dartboard, faith in the relative frequency interpretation of probability, and plenty of time.
Step one, build yourself a dartboard that looks like this:
The area of the circular dart board is pi r^2. And the area of the square behind the board is (2r)^2, or 4 r^2.
Imagine throwing darts at this board. What's the probability it will land inside the circle? Easy enough -- it's the area of the circle divided by the area of the square, or pi/4.
Now we have an expression we can use to estimate pi. The process is simple.
Start throwing darts at the board. Be sure to throw randomly -- you'll need a uniform distribution of throws. Count the percentage that fall inside the circle. Then multiply this percentage by four. The result will be pretty close to pi. The more darts you throw, the better the estimate will get.
Here's a simple piece of Matlab code that will do this simulation for you. For those who want to see how this works in a spreadsheet, here's an Excel file with a simulation for n = 500 and n = 10,000 using random numbers from random.org for the x and y coordinates of the dart throws.
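If neither Matlab nor Excel is handy, here's an equivalent sketch in Python (the function name and the one million throws are my own choices; it assumes a circle of radius 1 sitting inside a square of side 2, which is what the picture above describes).

import random

def estimate_pi_darts(throws=1_000_000):
    inside = 0
    for _ in range(throws):
        x = random.uniform(-1, 1)    # where the dart lands in the square
        y = random.uniform(-1, 1)
        if x * x + y * y <= 1:       # inside the circle of radius 1
            inside += 1
    return 4 * inside / throws       # fraction inside the circle, times four

print(estimate_pi_darts())

Each run prints a slightly different number in the neighborhood of 3.14; more throws tighten the estimate.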
Update: A reader via email suggests another interesting method known as "Buffon's Needle Problem," which you can read about here.
(Note: see here for a simpler version of this proof.)
Economic theory relies pretty heavily on a handful of mathematical proofs. One of the most important for proving the existence of equilibria is the Bolzano-Weierstrass theorem.
In this post I'll give a basic explanation of the Bolzano-Weierstrass theorem, and walk you through a simple proof of it.
Bolzano-Weierstrass is a theorem about sequences of real numbers. Sequences can either be convergent -- that is, they can approach some long-run value -- or they can be divergent, failing to settle down to any single value (for example by blowing up to infinity or by oscillating forever). For many equilibrium concepts, it's necessary to show that a given sequence is convergent and stays within some reasonable set of values as we move far out along the sequence in the long run.
Bolzano-Weierstrass basically says that we'll always end up with a convergent subsequence if we start with the right kind of sequence to begin with. Specifically, it says that if we start with a bounded sequence -- that is, if every term of our sequence stays inside some fixed, closed interval -- then it's guaranteed to have a subsequence that converges. Often that means the equilibrium we're interested in actually exists, which is a good thing.
In the rest of this post, I'll walk through a typical proof of the theorem. Different variations on this basic proof are used pretty widely in the more mathematical branches of economics such as game theory and general equibrium theory.
Proving the Theorem
Here's the formal statement of the Bolzano-Weierstrass theorem: Every bounded real sequence has a convergent subsequence.
Proof. Let (x_n) be a bounded real sequence. Then there exists a closed interval [c, d] such that x_n lies in [c, d] for all positive integers n.
Now divide the interval [c,d] into the following two intervals:
Since the sequence is bounded, infinitely many terms of it are contained in the original interval [c,d]. Now that we've split that interval in two, one of those two intervals must contain x_n for infinitely many positive integers n. Call this interval [c1, d1].
Now repeat this step, dividing [c1, d1] into the following:
Similarly, one of these intervals must also contain x_n for infinitely many positive integers n. Call this [c2, d2]. Continuing this process, we get a sequence of intervals [c1, d1], [c2, d2], [c3, d3], ... each nested inside the one before,
where each of these intervals contains x_n for infinitely many positive integers n, and the width of the kth interval [ck, dk] is given by (d - c) / 2^k.
Now pick a positive integer n1 such that x_n1 lies in [c1, d1]. Since the next interval [c2, d2] contains x_n for infinitely many positive integers n, we can now choose an n2 > n1 such that x_n2 lies in [c2, d2]. Continuing this process, we can obtain elements x_n1, x_n2, x_n3, ... with n1 < n2 < n3 < ... such that x_nk lies in [ck, dk] for every k.
This means the newly created sequence (x_nk) is a subsequence of (x_n). So we've established that every bounded real sequence has such a subsequence. The next step is to go further and show that this subsequence actually converges.
To see why this subsequence converges, consider the illustration below. It shows the original interval [c, d] at the top, with the subsequent intervals [c1, d1], [c2, d2], ...] we created by repeatedly dividing it below.
Note that each interval is half the length of the previous interval, and that the boundaries of each interval are always moving inward or staying the same -- they're never moving outward. That is, the sequence of lower bounds is always moving inward as k grows. Similarly, the sequence of upper bounds is always pushing inward as k grows. That means both the lower and upper bounds are monotone and bounded. And that guarantees they are both convergent.
Let's label the value that the lower bounds converge to as L. Similarly, let's label the value the upper bounds converge to as M. Using our equation for the width of each interval from above, we know that
Taking the limit of both sides, we see that
So this shows that both the upper and lower bounds converge to the same value of L = M.
We saw above that x_nk lies in [ck, dk], so that means ck <= x_nk <= dk for all k. But since the lower bounds and the upper bounds converge to the same limit of L = M, by the so-called "squeeze theorem" we know that x_nk must also converge to the same limit of L = M.
That shows that every bounded real sequence -- that is, every sequence whose terms all stay inside some compact interval -- has a subsequence (x_nk), and that this subsequence is convergent. And that completes the proof.
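To make the halving step a little more concrete, here's a small numerical illustration in Python. It's only a sketch: the example sequence is my own choice, and since a computer can't literally check that an interval contains infinitely many terms, counting terms among the first 100,000 stands in for that condition.

def x(n):
    return (-1)**n * (1 + 1 / n)     # a bounded sequence with no limit of its own

terms = [x(n) for n in range(1, 100_001)]
c, d = -2.0, 2.0                     # an interval containing every term

for _ in range(15):
    mid = (c + d) / 2
    left = sum(1 for t in terms if c <= t <= mid)
    right = sum(1 for t in terms if mid < t <= d)
    if left >= right:                # keep the half holding "infinitely many" terms
        d = mid
    else:
        c = mid

print(c, d)    # a tiny interval hugging -1, one of the subsequential limits

After fifteen halvings the interval has collapsed onto a small neighborhood of -1, which is exactly the kind of subsequential limit the theorem promises.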
For more on Bolzano-Weierstrass, check out the Wiki article here.
Each year, some fraction of Seattle's population moves to Portland. And some fraction of Portland's population moves back to Seattle. If this keeps up year after year, what happens in the long run? Will the populations reach a steady state, or will one city empty out into the other?
These are questions we can answer with simple linear algebraic models. In this post I'll explain how to set up models for problems like this, and how to solve them for their long-run values.
Setting Up the Problem
Here are some basic facts. Imagine Seattle and Portland both have populations of 500,000. Each year, 5 percent of Seattle's population moves to Portland and 20 percent of Portland's population moves to Seattle. The rest don't move. This pattern repeats every year, forever.
The first step is to write this as a linear system. The idea is that we have a system -- in this case, the population in both cities -- that starts at an initial state u0 at time t = 0. At time t = 1, some transformation happens, taking us to state u1. Then the transformation is repeated at time t = 2, 3, ... bringing us to states u2, u3, and so on. The question we're after is what does the system look like after k steps, and what happens in the long run when k gets large?
In our problem, we can write the initial state u0 as a vector of populations in the two cities. The first entry is Seattle's population and the second entry is Portland's:
The population shifts between the two cities -- that is, the transformation that happens at time t = 1, 2, ... -- can be described by this matrix:
Here's how to read this. The first column and first row describe Seattle's population, and the second column and second row describe Portland's. Reading down the first column, it says 95 percent of Seattle's population stays in Seattle, while 5 percent moves to Portland. Similarly, the second column tells us that 20 percent of Portland's population moves to Seattle, while 80 percent stays at home. Since the columns of A all sum to 1, this is what's known as a Markov matrix.
Putting these together, our model works like this. Start with the vector describing the initial state, u0. To move to state u1, we multiply by the matrix A. That is,
To move to state u2, u3, ... we again multiply by A, or
That's our basic model describing the population in the two cities after k steps. In the rest of this post, I'll explain how to solve it for the long-run equilibrium.
For any linear system, the key to understanding how it behaves over time is through eigenvalues and eigenvectors. We need to get to the bottom of what the matrix A is doing to the system each time it is multiplied. The way to see that is by examining A's eigenvalues and eigenvectors.
The first step is to find the eigenvalues of A. Since we've got a 2x2 matrix, we'll normally expect to find two of them.
Here's how to find them. Start with the definition of eigenvalues as,
To solve for lambda, we first rewrite this as
Looking at this expression, we can see that if it holds for some non-zero vector x, then the matrix A - lambda I can't be invertible. And if A - lambda I isn't invertible, that means its determinant must be zero. Using that fact, we can solve for the eigenvalues.
In our case, we have a 2x2 matrix which has a pretty simple determinant,
That last equation is called the "characteristic polynomial" of A. It's what we solve to find the eigenvalues. In this case, it's a quadratic since we're working with a 2x2 matrix, which will have two solutions. If the problem was for n cities instead, we'd end up with a characteristic polynomial of degree n.
In this case, we have an easy solution to the polynomial:
These two values of lambda are the two eigenvalues of A.
It's no coincidence that one of our eigenvalues is 1. That's always the case with Markov matrices. And that makes the characteristic polynomial easy to solve. We always know that the trace of A -- the sum of A's elements down the diagonal -- is equal to the sum of the eigenvalues. If one of them is 1, then the other must be 1.75 minus 1 or .75. Those are our two solutions, so we can immediately factor and solve.
Now that we have the two eigenvalues, the next step is to find the eigenvectors x1 and x2. Here's how we do that. For each eigenvalue lambda1 and lambda2, we find the null space of the matrix A - lambda I. That is, we find the vectors x that make (A - lambda I) x = 0.
We do this by using row reduction on A - lambda I until we get it in row echelon form. Then we can pick off the pivots and free variables, and work out the eigenvectors.
First take the case of lambda = 1. Then the matrix A - lambda I becomes
Adding row one to row two, we get
So our equation becomes,
Since -.05 is attached to our "pivot" and .20 is attached to our "free" variable, we set our free variable x2 equal to 1 and solve for the pivot variable x1, or
So our first eigenvector x1 is
Following the same procedure for the other eigenvalue, lambda = .75, we find the second eigenvector x2 is
Creating S and Λ Matrices
The next step is to form a new matrix S which has the eigenvectors found above as its columns. That is, let
Also, let's form a new matrix Λ which has the eigenvalues we found above along the diagonal and zeros elsewhere. That is, let
We know from the definition of eigenvalues and eigenvectors that A x = lambda x. Now that we've got our eigenvectors in the columns of the matrix S, and our eigenvalues along the diagonal of Λ, we can rewrite our basic relationship as
Solving this for A, we can see an important relationship as the system moves from one state to the next:
Note what happens to this equation if we multiply by A over and over, moving from the first state u1 to the second, third, and so on:
Our original model was in the form uk = A^k u0. But now we've found that A^k just equals S Λ^k S^-1. So we can rewrite our model as
But we can compress this model even further. It's possible to write the initial state vector u0 as a linear combination of the two eigenvectors x1 and x2. That is, we can write
which can be solved for c as
This expression for c can now be substituted back into the model above to get
This is the model we've been working to find. It says the kth state of our model is equal to the matrix of eigenvectors S times the matrix of eigenvalues raised to the power of k, times some vector c that gives combinations of them.
Interpreting the Model
Expanding out the expression uk = S Λ^k c, we can see better why this is a useful way to model our population problem,
What this is saying is that the kth state of our model is just the sum of each of the eigenvectors, times their corresponding eigenvalues raised to the power of k, times some constant c. Written this way, it's easy to see where this model is headed in the long run. All changes as we move forward in time are determined by the eigenvalues and .
If the absolute value of an eigenvalue is less than one -- that is, if -1 < lambda < 1 -- it will rapidly decay and go to zero as k grows. If the absolute value is greater than 1, it will explode and grow without bound as k grows. And if it equals one, it's what we call a "steady state" of the model since it remains unchanged as k grows.
In our case, because we set up our problem using population proportions that always sum to one, our matrix A was a Markov matrix. It always has an eigenvalue that equals one, and the rest have absolute values less than one. That means our model is guaranteed to have a steady state, as any other factors influencing the model will decay and disappear over time as k grows.
Plugging in the values for our population problem above, here's our actual solution,
At this point, it's easy to see what will happen to this model in the long run. Let k grow very large. The term on the right will rapidly shrink to zero -- this is the "transitory" component of the model. The term on the left just gets multiplied over and over by 1. So that's our steady state, or
This is the long-run equilibrium for our model. What it means is that starting with populations of 500,000 each in Seattle and Portland, migration slowly eroded Portland and expanded Seattle. And in the long run, Seattle's population levels off at 800,000 and Portland's shrinks to 200,000.
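Here's a short sketch in Python with NumPy that reproduces the whole argument numerically (the variable names and the 100-year horizon are my own choices).

import numpy as np

A = np.array([[0.95, 0.20],
              [0.05, 0.80]])            # columns sum to 1: a Markov matrix
u0 = np.array([500_000.0, 500_000.0])   # [Seattle, Portland] at time zero

eigenvalues, eigenvectors = np.linalg.eig(A)
print(eigenvalues)                      # 1.0 and 0.75, possibly in either order

uk = u0.copy()
for _ in range(100):                    # iterate the model forward 100 years
    uk = A @ uk
print(uk)                               # approaches [800000, 200000]

# The steady state read straight off the eigenvector for eigenvalue 1,
# rescaled so its entries sum to the total population.
steady = eigenvectors[:, np.argmax(eigenvalues)]
print(steady / steady.sum() * u0.sum())

Both the brute-force iteration and the eigenvector for the eigenvalue of 1 land on the same 800,000/200,000 split.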
For those who want to see how this works in an Excel spreadsheet, here's a file with the above model.
[Here's a post where I explain how to use this method to generate the (k+1)th term of the Fibonacci sequence given some input k.]
Linear regression is the most important statistical tool most people ever learn. However, the way it's usually taught makes it hard to see the essence of what regression is really doing.
Most courses focus on the "calculus" view. In this view, regression starts with a large algebraic expression for the sum of the squared distances between each observed point and a hypothetical line. The expression is then minimized by taking the first derivative, setting it equal to zero, and doing a ton of algebra until we arrive at our regression coefficients.
Most textbooks walk students through one painful calculation of this, and thereafter rely on statistical packages like Stata or Excel -- practically inviting students to become dependent on software and never develop deep intuition about what's going on.
This is the way people who don't understand math teach regression. In this post I'll illustrate a more elegant view of least-squares regression -- the so-called "linear algebra" view.
The goal of regression is to fit a mathematical model to a set of observed points. Say we're collecting data on the number of machine failures per day in some factory. Imagine we've got three data points:
(day, number of failures)
The goal is to find a linear equation that fits these points. We believe there's an underlying mathematical relationship that maps "days" uniquely to "number of machine failures," or in the form b = C + Dx, where b is the number of failures per day, x is the day, and C and D are the regression coefficients we're looking for.
We can write these three data points as a simple linear system like this:
For the first two points the model is a perfect linear system. When x = 1, b = 1; and when x = 2, b = 2. But things go wrong when we reach the third point. When x = 3, b = 2 again, so we already know the three points don't sit on a line and our model will be an approximation at best.
Now that we have a linear system we're in the world of linear algebra. That's good news, since it helps us step back and see the big picture. Rather than hundreds of numbers and algebraic terms, we only have to deal with a few vectors and matrices.
Here's our linear system in the matrix form Ax = b:
What this is saying is that we hope the vector b lies in the column space of A, C(A). That is, we're hoping there's some linear combination of the columns of A that gives us our vector of observed b values. Unfortunately, we already know b doesn't fit our model perfectly. That means it's outside the column space of A, and the system Ax = b has no exact solution. Since A isn't a square, invertible matrix, we also can't simply form A^-1 and solve the equation for the vector x.
Let's look at a picture of what's going on. In the drawing below the column space of A is marked C(A). It forms a flat plane in three-space. If we think of the columns of A as vectors a1 and a2, the plane is all possible linear combinations of a1 and a2. These are marked in the picture.
By contrast, the vector of observed values b doesn't lie in the plane. It sticks up in some direction, marked "b" in the drawing. The plane C(A) is really just our hoped-for mathematical model. And the errant vector b is our observed data that unfortunately doesn't fit the model.
So what should we do? The linear regression answer is that we should forget about finding a model that perfectly fits b, and instead swap out b for another vector that's pretty close to it but that fits our model. Specifically, we want to pick a vector p that's in the column space of A, but is also as close as possible to b.
The picture below illustrates the process. Think of shining a flashlight down onto b from above. This casts a shadow onto C(A). This is the projection of the vector b onto the column space of A. This projection is labeled p in the drawing.
The line marked e is the "error" between our observed vector b and the projected vector p that we're planning to use instead. The goal is to choose the vector p to make e as small as possible. That is, we want to minimize the error between the vector p used in the model and the observed vector b.
In the drawing, e is just the observed vector b minus the projection p, or b - p. And the projection itself is just a combination of the columns of A -- that's why it's in the column space after all -- so it's equal to A times some vector x-hat.
To minimize e, we want to choose a p that's perpendicular to the error vector e, but points in the same direction as b. In the figure, the intersection between e and p is marked with a 90-degree angle. The geometry makes it pretty obvious what's going on. We started with b, which doesn't fit the model, and then switched to p, which is a pretty good approximation and has the virtue of sitting in the column space of A.
Solving for Regression Coefficients
Since the vector e is perpendicular to the plane of A's column space, that means the dot product between them must be zero. That is,
But since e = b - p, and p = A times x-hat, we get,
Solving for x-hat, we get
The elements of the vector X-hat are the estimated regression coefficients C and D we're looking for. They minimize the distance e between the model and the observed data in an elegant way that uses no calculus or explicit algebraic sums.
Here's an easy way to remember how this works. Doing linear regression is just trying to solve Ax = b. But if any of the observed points in b deviate from the model, that equation has no exact solution. So instead we multiply both sides by the transpose of A. As long as the columns of A are independent, the matrix A-transpose times A is square, symmetric and invertible. Then we just solve for x-hat.
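Here's that recipe in code for the three data points above (a sketch in Python with NumPy; the variable names are mine).

import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])    # first column for the intercept C, second for the slope D
b = np.array([1.0, 2.0, 2.0])

x_hat = np.linalg.solve(A.T @ A, A.T @ b)   # solves (A-transpose A) x-hat = A-transpose b
print(x_hat)                                # C = 2/3, D = 1/2

p = A @ x_hat                               # the projection of b onto the column space
e = b - p                                   # the error vector
print(A.T @ e)                              # essentially zero: e is perpendicular to C(A)

The printed coefficients are C = 2/3 and D = 1/2, and the last line confirms the error vector really is perpendicular to the column space.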
There are other good things about this view as well. For one, it's a lot easier to interpret the correlation coefficient r. If our x and y data points are normalized about their means -- that is, if we subtract their mean from each observed value -- r is just the cosine of the angle between b and the flat plane in the drawing.
Cosine ranges from -1 to 1, just like r. If the regression is perfect, r = 1, which means b lies in the plane. If b lies in the plane, the angle between them is zero, which makes sense since cos 0 = 1. If the regression is terrible, r = 0, and b points perpendicular to the plane. In that case, the angle between them is 90 degrees or pi/2 radians. This makes sense also, since the cos (pi/2) = 0 as well.
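And here's a quick check of that claim for the same three points (again only a sketch, with variable names of my own). Once the data are centered about their means, the cosine of the angle between the two vectors matches the correlation coefficient.

import numpy as np

x = np.array([1.0, 2.0, 3.0])
b = np.array([1.0, 2.0, 2.0])

xc = x - x.mean()                # center each variable about its mean
bc = b - b.mean()

cosine = xc @ bc / (np.linalg.norm(xc) * np.linalg.norm(bc))
r = np.corrcoef(x, b)[0, 1]
print(cosine, r)                 # both come out to about 0.866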
For those who want to try this at home, here's a handy Excel file summarizing the argument here, along with the cut-and-paste formulas you'll need in Excel.
(See this post at Chamberlain Economics, LLC.)
All good economics starts with theory. The world is a complicated place—far too complex to make sense of directly. Economic theory helps collapse that complexity into a few key relationships we can work out mathematically and check against the facts. The first step in every analysis is to sit down with pencil and pad to work out the theory.
To help our clients better understand the economic theory underlying our work, we’ll be posting an ongoing series of articles titled “Core Concepts” at the Chamberlain Economics, L.L.C. site. The goal is to provide a collection of simple and brief introductions to the core theoretical concepts used in our work.
As the first in the series we’ve posted “Core Concepts: The Economics of Tax Incidence”. The piece is designed as a refresher on the basics of tax incidence, and how it’s derived analytically from elasticities of supply and demand in the marketplace. This idea serves as the foundation for nearly all of our work on tax modeling and policy analysis.
(See this article at Chamberlain Economics, L.L.C. also.)
One of the hard parts about building Leontief input-output models is that the source data are hard to use.
Instead of producing a square industry-by-industry input-output table, the Bureau of Economic Analysis (BEA) produces rectangular “use” and “make” tables. The make table shows products produced by each industry, while the use table shows how products get used by industries, consumers, government and the rest of the world. However, what we need for Leontief models is a square table that shows only the industry-by-industry relationships.
If you're a researcher who has run into this problem, I have good news. I've produced summary-level (134 industries) and detailed-level (426 industries) input-output tables from BEA data, which are now for sale at Chamberlain Economics, L.L.C. I've also written up a methodology paper explaining how the tables are derived, allowing you to reproduce them quickly and easily. See here to place an order today. | http://www.the-idea-shop.com/ | 13
30 | Group decision making
Group decision making (also known as collaborative decision making) is a situation faced when individuals collectively make a choice from the alternatives before them. This decision is no longer attributable to any single individual who is a member of the group. This is because all the individuals and social group processes such as social influence contribute to the outcome. The decisions made by groups are often different from those made by individuals. Group polarization is one clear example: groups tend to make decisions that are more extreme than those of its individual members, in the direction of the individual inclinations.
There is much debate as to whether this difference results in decisions that are better or worse. According to the idea of synergy, decisions made collectively tend to be more effective than decisions made by a single individual. However, there are also examples where the decisions made by a group are flawed, such as the Bay of Pigs Invasion, the incident on which the Groupthink model of group decision making is based.
Factors that impact other social group behaviours also affect group decisions. For example, groups high in cohesion, in combination with other antecedent conditions (e.g. ideological homogeneity and insulation from dissenting opinions) have been noted to have a negative effect on group decision making and hence on group effectiveness. Moreover, when individuals make decisions as part of a group, there is a tendency to exhibit a bias towards discussing shared information (i.e., shared information bias), as opposed to unshared information.
Group Decision Making in Psychology
The social identity approach suggests a more general approach to group decision making than the popular Groupthink model, which is a narrow look at situations where group decision making is flawed. Social identity analysis suggests that the changes which occur during collective decision making are part of rational psychological processes which build on the essence of the group in ways that are psychologically efficient, grounded in the social reality experienced by members of the group, and have the potential to have a positive impact on society.
Formal Systems
- Consensus decision-making tries to avoid "winners" and "losers". Consensus requires that a majority approve a given course of action, but that the minority agree to go along with the course of action. In other words, if the minority opposes the course of action, consensus requires that the course of action be modified to remove objectionable features.
- Voting-based methods
- Range voting lets each member score one or more of the available options. The option with the highest average is chosen. This method has experimentally been shown to produce the lowest Bayesian regret among common voting methods, even when voters are strategic.
- Majority requires support from more than 50% of the members of the group. Thus, the bar for action is lower than with unanimity, and a group of "losers" is implicit in this rule.
- Plurality, where the largest block in a group decides, even if it falls short of a majority.
- Delphi method is a structured communication technique for groups, originally developed for collaborative forecasting but also used for policy making
- Dotmocracy is a facilitation method that relies on the use of special forms called Dotmocracy Sheets to allow large groups to collectively brainstorm and recognize agreement on an unlimited number of ideas they have authored.
Decision making in groups is sometimes examined separately as process and outcome. Process refers to the group interactions. Some relevant ideas include coalitions among participants as well as influence and persuasion. The use of politics is often judged negatively, but it is a useful way to approach problems when preferences among actors are in conflict, when dependencies exist that cannot be avoided, when there are no super-ordinate authorities, and when the technical or scientific merit of the options is ambiguous.
In addition to the different processes involved in making decisions, group decision support systems (GDSS) may have different decision rules. A decision rule is the GDSS protocol a group uses to choose among scenario planning alternatives.
- Gathering involves all participants acknowledging each other's needs and opinions and tends towards a problem solving approach in which as many needs and opinions as possible can be satisfied. It allows for multiple outcomes and does not require agreement from some for others to act.
- Sub-committee involves assigning responsibility for evaluation of a decision to a sub-set of a larger group, which then comes back to the larger group with recommendations for action. Using a sub-committee is more common in larger governance groups, such as a legislature. Sometimes a sub-committee includes those individuals most affected by a decision, although at other times it is useful for the larger group to have a sub-committee that involves more neutral participants.
- Participatory, where each actor would have a say in decisions directly proportionate to the degree that particular decision affects him or her. Those not affected by a decision would have no say and those exclusively affected by a decision would have full say. Likewise, those most affected would have the most say while those least affected would have the least say.
Plurality and dictatorship are less desirable as decision rules because they do not require the involvement of the broader group to determine a choice. Thus, they do not engender commitment to the course of action chosen. An absence of commitment from individuals in the group can be problematic during the implementation phase of a decision.
There are no perfect decision making rules. Depending on how the rules are implemented in practice and the situation, all of these can lead to situations where either no decision is made, or to situations where decisions made are inconsistent with one another over time.
Social decision schemes
Sometimes, groups may have established and clearly defined standards for making decisions, such as bylaws and statutes. However, it is often the case that the decision making process is less formal, and might even be implicitly accepted. Social decision schemes are the methods used by a group to combine individual responses to come up with a single group decision. There are a number of these schemes, but the following are the most common:
- Delegating decisions: where an individual, subgroup, or external party makes the decision for the group. For instance, in an authority scheme, the leader makes the decision; or in an oligarchy, a coalition will make the decision.
- Averaging decisions: this is when each individual member makes an independent private decision, which are later averaged together to give a nominal group decision.
- Plurality decisions: where members of the group vote on their preferences, either privately or publicly. The decision will then be based on these votes in a majority-rules scheme, in a more substantial two-thirds majority scheme, or in a Borda count method that uses ranking methods.
- Unanimous decisions: this is a consensus scheme, whereby the group discusses the issue until it reaches a unanimous agreement. This decision rule is what dictates the decision making for most juries.
- Random decision: in this type of scheme, the group will leave the choice up to chance. For example, picking a number between 1 and 10, or flipping a coin.
There are strengths and weaknesses to each of these social decision schemes. Delegation saves time and is a good method for less important decisions, but ignored members might react negatively. Averaging responses will cancel out extreme opinions, but the final decision might disappoint many members. Plurality is the most consistent scheme when superior decisions are being made, and it involves the least amount of effort. Voting, however, may lead to members feeling alienated when they lose a close vote, or to internal politics, or to conformity to other opinions. Consensus schemes involve members more deeply, and tend to lead to high levels of commitment. But, it might be difficult for the group to reach such decisions.
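A small worked example can make those differences concrete. The sketch below, in Python, tallies the preferences of the same five members under a plurality rule, a Borda count, and an averaging (range-style) scheme; the ballots and the 0-to-10 scores are invented purely for illustration. Plurality deadlocks between A and B, while the Borda count and the averaged scores both single out B.

options = ["A", "B", "C"]

# Each ballot ranks the options from most to least preferred.
rankings = [["A", "B", "C"], ["A", "C", "B"], ["B", "C", "A"],
            ["C", "B", "A"], ["B", "A", "C"]]

# Plurality: only first choices count.
plurality = {o: sum(1 for r in rankings if r[0] == o) for o in options}

# Borda count: 2 points for first place, 1 for second, 0 for third.
borda = {o: sum(len(options) - 1 - r.index(o) for r in rankings) for o in options}

# Averaging / range voting: each member scores every option from 0 to 10.
scores = [{"A": 9, "B": 6, "C": 2}, {"A": 8, "B": 3, "C": 7},
          {"A": 2, "B": 9, "C": 6}, {"A": 1, "B": 5, "C": 9},
          {"A": 4, "B": 8, "C": 3}]
averages = {o: sum(s[o] for s in scores) / len(scores) for o in options}

print(plurality)   # {'A': 2, 'B': 2, 'C': 1}
print(borda)       # {'A': 5, 'B': 6, 'C': 4}
print(averages)    # {'A': 4.8, 'B': 6.2, 'C': 5.4}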
Normative Model of Decision Making
Groups have many advantages and disadvantages when making decisions. Groups, by definition, are composed of two or more people, and for this reason naturally have access to more information and have a greater capacity to process this information. However, they also present a number of liabilities to decision making, such as requiring more time to make choices and by consequence rushing to a low quality agreement in order to be timely. Some issues are also so simple that a group decision-making process leads to too many cooks in the kitchen: for such trivial issues, having a group make the decision is overkill and can lead to failure. Because groups offer both advantages and disadvantages in making decisions, Victor Vroom developed a normative model of decision making that suggests different decision making methods should be selected depending on the situation. In this model, Vroom identified five different decision making processes.
Decide- Here, the leader of the group uses other group members as sources of information, but makes the final decision independently, and does not explain to group members why she/he requires that information.
Consult (individual)- The leader talks to each group member alone, never consulting with the entire group as a whole. She/he then makes the final decision in light of this individually-obtained information.
Consult (group)- The leader consults the entire group at once, asking for opinions and information, and then comes to a decision.
Facilitate- In this strategy, the leader takes on a cooperative holistic approach, collaborating with the group as a whole as they work toward a unified and consensual decision. The leader is non-directive, and never imposes a particular solution on the group. In this case, the final decision is the one made by the group, and not the leader.
Delegate- The leader takes a backseat approach, passing the problem over to the group. The leader is supportive, but allows the group to come to a decision without their direct collaboration.
Decision Support Systems
The idea of using computerized support systems is discussed by James Reason under the heading of intelligent decision support systems in his work on the topic of human error. James Reason notes that events subsequent to the Three Mile Island accident have not inspired great confidence in the efficacy of some of these methods. In the Davis-Besse accident, for example, both independent safety parameter display systems were out of action before and during the event.
Decision making software is essential for autonomous robots and for different forms of active decision support for industrial operators, designers and managers.
Due to the large number of considerations involved in many decisions, computer-based decision support systems (DSS) have been developed to assist decision makers in considering the implications of various courses of action. They can help reduce the risk of human errors. DSSs which try to realize some human/cognitive decision making functions are called Intelligent Decision Support Systems (IDSS); see for example "An Approach to the Intelligent Decision Advisor (IDA) for Emergency Managers, 1999". On the other hand, an active/intelligent DSS is an important tool for the design of complex engineering systems and the management of large technological and business projects; see also: "Decision engineering, an approach to Business Process Reengineering (BPR) in a strained industrial and business environment".
Group Discussion Pitfalls
Groups have greater informational and motivational resources, and therefore have the potential to outperform individuals. However they do not always reach this potential. Groups often lack proper communication skills. On the sender side this means that group members may lack the skills needed to express themselves clearly. On the receiver side this means that miscommunication can result from information processing limitations and faulty listening habits of human beings.
It is also the case that groups sometimes use discussion to avoid rather than make a decision. Avoidance tactics include the following:
• Procrastination. Replacing high priority tasks with tasks of lower priority. The group postpones the decision rather than studying the alternatives and discussing their relative merits.
• Bolstering. The group may quickly or arbitrarily formulate a decision without thinking things through to completion. They then bolster their decision by exaggerating the favorable consequences of the decision and minimizing the importance of unfavorable consequences.
• Denying responsibility. The group delegates the decision to a subcommittee or diffuses accountability throughout the entire group, thereby avoiding responsibility.
• Muddling through. The group muddles through the issue by considering only a very narrow range of alternatives that differ to only a small degree from the existing choice.
• Satisficing. A combination of satisfy and suffice. Members accept a low risk, easy solution instead of searching for the best solution.
• Trivializing the discussion. The group will avoid dealing with larger issues by focusing on minor issues.
Two fundamental laws that groups all too often obey:
Parkinson’s Law: A task will expand to fill the time available for its completion. (Ex: Groups that plan to meet for an hour stay for the duration).
Law of triviality: The amount of time a group spends discussing an issue will be in inverse proportion to the consequentiality of the issue (Ex: Committee discusses $20 million stadium fund for 3 minutes).
Cognitive Limitations and Subsequent Errors
Individuals in a group decision-making setting are often functioning under substantial cognitive demands. As a result, cognitive and motivational biases can often impact group decision making. There are three categories of potential biases that a group can fall victim to when engaging in decision-making.
1. Sins of Commission: The misuse, or inappropriate use, of information. These can include: a) Belief perseverance: when a group utilises information in its decision making which has already been deemed inaccurate. b) Sunk cost bias: when a group remains committed to a given plan of action solely because an investment has already been made in that plan, despite its usefulness. c) Extra-evidentiary bias: when a group chooses to rely on information despite being explicitly told it should be ignored. d) Hindsight bias: when group members falsely over-estimate how accurate their past knowledge of a given outcome was.
2. Sins of Omission: The overlooking of useful information. These can include: a) Base rate bias: when group members ignore applicable information they have concerning basic trends/tendencies. b) Fundamental attribution error: when group members base their decisions on inaccurate appraisals of individuals' behaviour.
3. Sins of Imprecision: Relying too heavily on heuristics that over-simplify complex decisions. These can include: a) Availability heuristic: when group members rely on information that is readily available in making a decision. b) Conjunctive bias: when groups are not aware that the probability of a given event occurring is always at least as great as the probability of that event occurring together with a second event. c) Representativeness heuristic: when group members rely too heavily on decision-making factors that seem meaningful but are in reality somewhat misleading.
Indeed, in a group-decision making context, it would be beneficial for group members to be cognisant of the aforementioned biases and errors which may affect their ability to make informed and tactful decisions.
References
- Moscovici, S.; M. Zavalloni (1969). "The group as a polarizer of attitudes". Journal of Personality and Social Psychology 12 (2): 125–135. DOI:10.1037/h0027568
- Janis, I. L. (1972). Victims of groupthink. Boston, MA: Houghton-Mifflin.
- Haslam, S. A. (2001). Psychology in organisations: The social identity approach. London: Sage. p. 177.
- Hastie; Kameda (2005).
- Davis et al. (1988).
- Kameda et al. (2002).
- Forsyth, D. R. (2006). Decision Making. In Forsyth, D. R. , Group Dynamics (5th Ed.) (P. 317-349) Belmont: CA, Wadsworth, Cengage Learning.
- Vroom, V. H. (2003). Educating managers in decision making and leadership. Management Decision, 10, 968–978
- James Reason (1990). Human Error. Ashgate. ISBN 1-84014-104-2 | http://en.wikipedia.org/wiki/Group_decision_making | 13 |
23 | A Brainstorming Activity for ESL/EFL Students
Hall Houston
City University of Hong Kong, English Language Centre (Hong Kong, China)
hallhouston ( at ) yahoo.com
Introduction
While brainstorming is a commonplace activity for generating new ideas, many students have not had guided practice. This lesson will enable them to brainstorm more effectively.
Lesson
1. Ask the class, "How do artists and businesspeople come up with new ideas?" Give them a couple of minutes to think, then call on a few students to give you their answers.
2. Tell the class you are going to do an activity called brainstorming. Ask students to raise their hands if they have ever participated in a brainstorming session before, then call on anyone with their hand up to describe their experience.
3. First, put the students into two groups, Team A and Team B. Assign one student in each group to be a leader. Give the team leaders the following slips:
|Team A - Leader:|
Your job is to encourage the other students to contribute ideas on how to improve this English class. However, you do not want to waste any time. If a student states an idea which seems useless, tell the student "That’s no good" or "Bad idea", then move on to another student.
|Team B - Leader:|
Your job is to encourage the other students to contribute ideas on how to improve this English class. Ask one student in the group to write down all ideas. Offer praise for everyone's contributions and don't criticize any of the ideas. Make sure all ideas are written down.
4. Give students ten minutes to do the brainstorming activity.
5. Now ask for feedback. Which group produced more ideas? Which group enjoyed the activity more?
6. Ask both group leaders to read out their slips of paper. Ask the class to guess which one was brainstorming the right way.
7. Write these rules of successful brainstorming on the board:
- write down all ideas
- the more ideas, the better
- wild, unusual ideas are welcome
- feel free to take someone else's idea and expand on it
- save criticism until the end of the session
9. When they are finished, have both groups choose their three best ideas and write them up on the board. Ask a few students how they feel about brainstorming.
10. Now, tell the class you want to arrange a brainstorming activity for a future lesson. (Note: While it might be tempting to turn this discussion into another brainstorming activity, I would recommend changing the format to add variety to the lesson.)
Give each student an index card and tell them you want them to write the answers to the following questions:
- Topic: Which topic should we work on?
- Location: Where should we have our brainstorming activity?
- Time: When should it take place?
- Duration: How long should we spend brainstorming?
- Number of groups: How many brainstorming groups will we have?
- Participants: Who will be in each group?
- Leader: Who will be the leader of each group?
- Secretary: Who will take notes in each group?
- Other considerations: What else would make our brainstorming activity more productive (music, pictures, snacks, etc.)?
- End product: How should we present our best ideas (poster, presentation, role play, short essay)?
11. Take notes and mark your calendar for the brainstorming session.
12. After the students' scheduled brainstorming session, ask for some feedback. What did they like about the brainstorming session? What could have been improved? Would they like to do more brainstorming in a future lesson?
Conclusion
Brainstorming is just one of a wide range of creativity exercises you can use to develop students' thinking skills. If you are interested, you can look into other variations of brainstorming such as rolestorming (students role play different characters to brainstorm), brainwriting (students write down ideas and pass them on to other students who expand on them) and reverse brainstorming (students come up with ideas for producing a problem instead of solving it). Better yet, you might come up with your own twist on brainstorming.
The Internet TESL Journal, Vol. XII, No. 12, December 2006 | http://iteslj.org/Lessons/Houston-Brainstorming.html | 13 |
24 |
Proper names are familiar expressions of natural language. Their semantics remains a contested subject in the philosophy of language, with those who believe a descriptive element belongs in their meaning (whether at the level of intension or at the level of character) ranged against supporters of the more austere Millian view.
- 1. Syntax
- 2. Semantics
Proper names are distinguished from proper nouns. A proper noun is a word-level unit of the category noun, while proper names are noun phrases (syntagms) (Payne and Huddleston 2002, 516). For instance, the proper name ‘Jessica Alba’ consists of two proper nouns: ‘Jessica’ and ‘Alba’. Proper names may consist of other parts of speech, too: ‘Brooklyn Bridge’ contains the common noun ‘Bridge’ as well as the proper noun ‘Brooklyn’. ‘The Raritan River’ also includes the determiner ‘the’. ‘The Bronx’ combines a determiner and a proper noun. Finally, ‘the Golden Gate Bridge’ is a proper name with no proper nouns in it at all.
While any string of words (or non-words) can be a proper name, we may (tentatively) locate that liberality in the form of proper nouns. Proper names, by contrast, simply have a large number of paradigms corresponding to the sorts of things named (Carroll 1985). For instance, official names of persons in most Western cultures consist of (at least) first and last names (themselves proper nouns). Names of bridges have an optional definite determiner and often contain the common noun ‘bridge’. Hence we can have bridge names that embed other proper names like ‘The George Washington Bridge’. We can also have structurally ambiguous names like ‘the New New York Public Library’.
Names are often (Geurts 1997, Anderson 2006) claimed to be syntactically “definite,” since they can occur with markers of definiteness, such as the definite article in English. Since definite expressions include pronouns, demonstratives and definite descriptions, this evidence is often used to support views on which names are subsumed under one of these categories (Larson and Segal 1995, Elbourne 2005), though it is also consistent with names forming their own species of definite.
What we might call proper nominals (proper names without their determiner) can modify other nouns, as in ‘a Bronx resident’. They can also occur as the restrictors of determiners other than ‘the’, as in ‘every Alfred’. Some (notably Burge —see the Description Theory below) take such non-argumental occurrences as constituting their primary use (in a theoretical, rather than statistical sense). However, it might seem more natural, pre-theoretically, to regard such occurrences as on a par with “coerced” expressions such as the verb ‘googled’.
Is there just one proper name ‘Alice’ or are there many homonyms (‘Alice-1’, ‘Alice-2’, etc.)? On the one hand, it is tempting to infer the uniqueness of the name, on syntactic grounds, from the uniqueness of the proper noun (arguably the same noun recurs in the names ‘Alice Waters’ and ‘Alice Walker’, as well as in the phrase ‘two famous Alices’). On the other hand, there is pressure from semantics to recognize multiple homonyms (or else large-scale ambiguity). For instance, if the meaning of a name is its referent, then there is either one ambiguous name ‘Alice’, with as many meanings as there are individuals named Alice, or many univocal names with identical spelling (see Kripke 1980, 8 and especially Kaplan 1990 for the latter view). If instead the meaning of a name corresponds to a rule determining, or constraining, its reference in a context, then there is no pressure to adopt either expedient.
J. S. Mill is given credit (and naming rights) for the commonsense view that the semantic contribution of a name is its referent (and only its referent). For instance, the semantic value of the name ‘Aristotle’ is Aristotle himself (note that this assumes that, by ‘Aristotle’, a particular, as opposed to generic, name is intended—see Syntax above). It is unlikely that Mill was the first to hold this view (Mill's argument that a town could still with propriety be called ‘Dartmouth’ even though it didn't lie at the mouth of the Dart River engages with a dialectic as old as Plato's Cratylus), which underwent a revival in the second half of the twentieth century, beginning with Ruth Barcan Marcus 1961.
Frege's puzzle of ‘the Morning Star’ and ‘the Evening Star’ challenges the Millian conception of names (note that while Frege used ‘proper name’ [Eigenname] to cover singular terms generally, both expressions seem to be proper names of a sort—“star” names—see Syntax). For while both expressions have the same referent (the planet Venus), they do not seem equivalent in cognitive significance, nor do they contribute in the same way to the truth conditions of all sentences in which they occur. In particular, they cannot be substituted salva veritate (preserving truth) in the scope of propositional attitude verbs (this claim is subject to dispute—see Salmon 1986):
- Homer believed that the Morning Star was the Morning Star. (True)
- Homer believed that the Morning Star was the Evening Star. (False)
Russell (1911) required that a propositional attitude holder be acquainted with each of the components of the proposition in question. This presents a further problem for the Millian view, for it seems that one can believe the proposition expressed by the sentence ‘Aristotle was wise’ without personally being acquainted with Aristotle, suggesting that Aristotle is not himself contributed to that proposition.
Even if we don't find Russell's epistemological views persuasive, names without a referent (e.g. ‘Atlantis’) pose a problem for Millianism. For it is plausible that the sentence ‘Atlantis lies to the west of Gibraltar’ expresses a proposition (and one distinct from that expressed by ‘El Dorado lies to the west of Gibraltar’, for someone might believe the former without believing the latter) and yet on the Millian view ‘Atlantis’ does not contribute anything to the semantic content of the sentence (and hence nothing over and above what ‘El Dorado’ might contribute).
Millians have made responses to all three of these objections. For Frege's puzzle, see, to begin with, Crimmins and Perry 1989, Richard 1983, Salmon 1986, Soames 1987 and 1989. For the puzzle of empty names, see Braun 1993 and the essays in Everett and Hofweber 2000. Russell's conditions on singular thought are now generally viewed as overly stringent, and it is common to assume that we are in a position to entertain a proposition with Aristotle as a constituent (see, for instance, Kaplan 2012).
Frege's (1952) answer to his own puzzle was to add an additional tier, of Sinn or “sense,” to the referential semantic value of a name. While ‘the Morning Star’ and ‘the Evening Star’ have the same reference, or ground-floor semantic value, the expressions differ at the level of sense.
Frege left his notion of sense somewhat obscure. Subsequent theorists have discerned a theoretical role unifying several distinct functions (cf. Kripke 1980, 59; Burge 1977, 356). First, as just remarked, an expression has a sense (along with a Bedeutung or reference) as part of its semantic value. Its sense is its contribution to the thought (proposition) expressed by a sentence in which it occurs. Names, considered as generic syntactic types, most likely do not have senses as their linguistic meanings. However, any successful use of a generic name (or perhaps any “particular” name) will express a complete sense. Second, the sense of an expression determines its reference. Third, sense encapsulates the cognitive significance of an expression. In the last capacity, the sense of a sentence—a thought (proposition)—must obey Frege's intuitive criterion of difference (Evans 1982). Roughly, any two sentences that may simultaneously be held to have opposite truth-values by the same rational agent must express different thoughts.
Take ‘the Morning Star’ and ‘the Evening Star’. In addition to referring to Venus, each of these names has a sense. The sense in each case determines (perhaps with respect to some parameter) the referent Venus. Additionally, the senses encapsulate the cognitive significance of each expression. This implies that the senses of the two names are different, since the thought expressed by (3) is distinct from the thought expressed by (4) (from the intuitive criterion of difference, and the fact that someone might think (3) is true but (4) is false).
- The Morning Star is the Morning Star
- The Morning Star is the Evening Star
Neo-Fregeans have come up with a host of candidates for the role of sense. These candidates do not always satisfy all of Frege's requirements (though they usually satisfy at least one), making the Neo-Fregean camp somewhat heterogeneous. For example, Michael Devitt (1981) takes senses to be causal-historical chains linking utterances of names to their referents (see the Causal-Historical Theory below). For him, senses play a role in semantics (by constraining the notion of synonymy and the truth conditions of attitude reports) without encapsulating the cognitive significance of an expression for a group of speakers. John McDowell (1977) provides an account of sense that fills the cognitive and reference-determining roles Frege ascribed to it, without adopting a two-tiered semantic theory (that is to say, without reifying sense as a semantic value). He associates the sense of a name with an appropriately stated clause in a Tarskian truth theory (making it possible to state what one must know to have the sense but not what the sense itself consists in).
Perhaps the best known account (emerging from the work of Carnap (1947) and Church (1951)) treats sense as intension. An intension is a function from possible worlds to extensions. For instance, the intension of ‘the number of planets’ is a function that, given a possible world w, returns a number—the number of planets at w. The extension of an expression at the actual world corresponds to its reference (in the case of ‘the number of planets’ this is 8); thus intension can be said to determine reference (relative to a world parameter). Moreover, if we take propositions to be functions from possible worlds to truth-values (i.e., intensions of sentences), then we can easily treat the intension of a noun phrase as its compositional contribution to the proposition expressed by a sentence. Finally, the intension of a definite description can be seen to correspond to its cognitive significance. The significance of a definite description ‘the F’ is presumably the information that allows one to discriminate possible worlds based (only) on who or what is (uniquely) F. The intension of a definite description partitions logical space (i.e., the set of all possible worlds) in precisely this manner.
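Schematically (our own gloss on the example just given, not notation drawn from Carnap or Church), the intension of ‘the number of planets’ may be written as the function
λw . the number of planets at w
which returns 8 when applied to the actual world, and a different number when applied to a world where the planetary census differs.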
We can cook up an intension for a name N by finding a (proper) definite description ‘the F’ true of the referent of the name at the actual world, and then setting the intension of N to the function that takes a world w and outputs the F at w. Indeed, for any intelligible intension for a name, there is a corresponding definite description. The view of sense as intension thus has many of the same features (and, as we will see, drawbacks) as the Description Theory of names.
The Description Theory of names (a.k.a. descriptivism) says that each name N has the semantic value of some definite description ‘the F’. For instance, ‘Aristotle’ might have the semantic value of ‘the teacher of Alexander the Great’. As stated, this view is consistent with Millianism (it would be Millianism if we assumed that the semantic value of a definite description was the individual it referred to); in practice, however, it is always coupled with a view on which definite descriptions have either a Russellian (see the section on Russell's theory in the entry on descriptions) or an intensional semantics (see the section on Sense Theories above). In the latter case, descriptivism corresponds to the intensional interpretation of Frege's sense theory. It should be noted that Montague's (1973) treatment of names as denoting higher-order functions (i.e., quantifier denotations) does not belong under this heading. It was adopted for reasons of type consilience, rather than descriptivist intuitions. Moreover, his first meaning postulate imposes an intensionally rigid (see the next section) interpretation on names.
Russell is the earliest proponent of the Description Theory (see Sainsbury 1993 for a dissenting view). He applied it to ordinary, but not what he called “logically proper” names (the latter were in fact demonstratives like ‘this’ and ‘that’, and he gave them a Millian semantics). In conjunction with his semantics for definite descriptions, he used the theory to solve the puzzles mentioned in the last section, without resorting to a two-tiered semantic theory (see Sense Theories above). ‘The Morning Star’ and ‘the Evening Star’ (as well as ‘Atlantis’ and ‘El Dorado’) might correspond in semantic value to different definite descriptions, and so would make different semantic contributions to the sentences in which they occur (semantic contribution must, as on the sense theory, be connected with cognitive significance, if Russellian descriptivism is to answer Frege's puzzle). Moreover, a thinker can often be acquainted (on Russell's view) with the property F in the semantic value of the corresponding description where they cannot be acquainted with the individual the name refers to.
Famous deeds descriptivism is exemplified by the interpretation of ‘Aristotle’ as ‘the teacher of Alexander the Great’. Note that the latter description also contains a proper name, which will in turn be interpreted as a definite description. The hope is that this description will not mention Aristotle, and indeed that ultimately every description will bottom out in irreducible predicates (or “logically proper” names) rather than entering a loop (which would mean that we have not specified, only constrained, the semantic values in question). Like many exercises of this sort, this translation has never been carried out.
Some obvious problems with famous deeds descriptivism have been recognized. The choice of ‘the teacher of Alexander the Great’ as the description synonymous with ‘Aristotle’ seems arbitrary; why not ‘the most famous pupil of Plato’? Not only that, but at birth Aristotle was so-named, but not yet known as, for he had not yet become, the teacher of Alexander the Great. Finally, the vast majority of names belong to people (or inanimate objects) that have never performed any deed worthy of note. Two improvements of famous deeds descriptivism have been suggested. One is cluster descriptivism, put forward by Searle (1958) and Strawson (1959; 180 ff.), which says that a name corresponds to a definite description whose nominal is a disjunction (or more complex collocation) of predicates like ‘teacher of Alexander’ and ‘most famous pupil of Plato’. Since this approach doesn't address the problem of the cognitive significance of ‘Aristotle’ soon after his birth, it is often combined with context-sensitive descriptivism. On this view the semantic value of a name, while it is always that of some description, differs from context to context (even when the name is being used to speak about the same individual). So Aristotle's mother might have used the name ‘Aristotle’ with a different semantic value (corresponding to a different (cluster-)description) to a present-day Aristotle scholar. Frege (1952, 1956) and Russell seem to have held the context-sensitive view. Wittgenstein is often cited as a proponent of the cluster view, but attention to the text (1953, section 79) reveals that he is advocating context-sensitivity.
Metalinguistic descriptivism says that a proper name N has the semantic value of the definite description ‘the individual called N’ (Russell 1956, Kneale 1962, Bach 1981, Geurts 1997, Fara to appear). This suggestion has the advantage that the name's descriptive content is known to all speakers of the language, but has the disadvantage that, in most cases, the description is not proper (for example, there is more than one individual called ‘Alice’). Furthermore, it may not provide a satisfactory answer to Frege's puzzle, as Frege himself denied that the cognitive significance of a sentence like (4) was metalinguistic.
Tyler Burge (1973) finds support for the metalinguistic view in non-argumental occurrences of names, which often take on a metalinguistic interpretation, as in (5) (though this interpretation is not inevitable, cf. (6)):
- There are relatively few Alfreds in Princeton.
- There are relatively few Picassos at the Met.
Burge overcomes the problem of the impropriety of the metalinguistic description by treating names used as arguments as complex demonstratives (with a context-sensitive analysis; check Burge 1973 for the details), so he is properly speaking a “demonstrativist.”
Metalinguistic descriptivism is commonly presented in a package with the view that proper nominals (see Syntax above) are metalinguistic predicates (so the nominal ‘Alfred’ is a predicate true of individuals that are so called). The latter view accounts for (5) on its metalinguistic interpretation, and provides a compositional analysis of proper names consisting of ‘the’ followed by a proper nominal as metalinguistic definite descriptions. An unvoiced definite article must be posited for proper names that do not carry one overtly, so that all proper names, under analysis, resemble those of Modern Greek (a less remarked-upon fact about Modern Greek is that the definite article also shows up between demonstrative and nominal in complex demonstratives).
Compositional metalinguistic analyses of names have recently been defended by linguists and philosophers (Matushansky 2008; Fara to appear). Nevertheless, considerable obstacles remain for such analyses and for the general treatment of proper nominals as metalinguistic predicates. Most straightforwardly, it is not strictly speaking true to say that the Raritan River is called Raritan (it is called the Raritan), or that the Bronx is named Bronx (it is named the Bronx – cf. Geurts 1999: 209, which discusses examples that work differently). Nor is a Bronx resident a resident called Bronx, but rather one who resides in the Bronx. Even worse difficulties crop up for views that attempt to analyze the meaning of a nominal like ‘George Washington Bridge’ as the intersection of the meanings of its component nouns (as Matushansky 2008: 603–4 does).
The metalinguistic interpretation of non-argumental occurrences of names does not, in the end, support metalinguistic descriptivism. As Fara herself points out, any word can take on a metalinguistic interpretation in the right context (she gives the example below in Fara to appear):
- I gave my cat the name ‘Hominid’ and you gave your dog the same name; between us we have two Hominids.
But if every word can also be used as a predicate with a metalinguistic interpretation, then the parsimonious approach would be to explain this as semantic polysemy or by a general coercion mechanism (much like the mechanism of deferred interpretation Fara discusses) that can derive a metalinguistic interpretation of any word used as a predicate. In the absence of such a mechanism, one would need to stipulate an additional metalinguistic interpretation for ‘hominid’ (along with every other word). However, with such a mechanism in place, it is no more parsimonious to begin with the predicate interpretation of names and derive the argumental interpretation compositionally, than it is to begin with the argumental interpretation and derive the predicate interpretation using the aforementioned mechanism.
There are three well-known arguments against description and sense theories of names (the latter on the interpretation of sense as intension).
Kripke's modal argument (1980, 48-9) contends that names and definite descriptions differ in their “modal profiles” (see also Marcus's proof of the necessity of identity statements involving names). Names are rigid designators, which is to say that their intension is constant across metaphysically possible worlds (where defined). Definite descriptions like ‘the teacher of Alexander’, on the other hand, have non-constant intensions. Kripke backs up this taxonomy with intuitions about ‘might have’ modal sentences (taken in the “ontic” or “metaphysical” rather than the epistemic sense). For instance, while the first sentence sounds true on this reading, the second sounds false:
- Aristotle might not have been the teacher of Alexander
- Aristotle might not have been Aristotle
Kripke's epistemic argument (1980, 78; 87) is closely related, but trades on epistemic, rather than metaphysical, modality. His argument is not that names are rigid in epistemic contexts. That would be a hard sell, as (10) is true on its epistemic reading (Kripke 1980, 103-4):
- The Morning Star might have turned out not to have been the Evening Star
Instead, he argues that no definite description D has the same semantic value as the name ‘Aristotle’ (say), because otherwise the sentence (11) would be analytic, and so knowable a priori. (Kripke argues [1980, 68ff] that even the sentence ‘Aristotle is the individual called “Aristotle”’—supplied by the metalinguistic description theory—is not knowable a priori.)
- Aristotle is D
The semantic argument, due to Donnellan (1972) and Kripke (1980, 80ff), and related to the externalist arguments of Putnam (1975) and Burge (1979), drives a wedge between Frege's two semantic levels of sense and reference (see Sense Theories above). Sense, for Frege, constitutes the semantic contribution of an expression to the thought (proposition) expressed (and hopefully communicated) on an occasion of utterance. It comprises the cognitive significance of a term for a group of communicating agents. However, the argument goes, a sense understood in this way might not determine the correct reference. For instance, in a certain group the cognitive significance carried by the name ‘Peano’ might be the same as that of the description ‘the discoverer of the Peano axioms’ (the assumption is that the members of the group believe no more and no less than this about Peano), yet as it turns out Dedekind, not Peano, discovered the (misnamed) axioms. The problem is that the intension of ‘the discoverer of the Peano axioms’ maps the actual world onto Dedekind, and so is a presentation of the referent Dedekind, rather than a presentation of the referent of ‘Peano’. Thus the “sense” that captures the cognitive significance of ‘Peano’ (in a certain group) does not also determine the reference of ‘Peano’.
Kripke (1980, 81) and Donnellan (1972, 343) also point out that the cognitive significance of a name for a group might not amount to an intension (a function from worlds to extensions) at all. Kripke gives the example of the name ‘Feynman’ to which the members of a certain group attach the indefinite description ‘a physicist’, which is insufficient, due to the popularity of the profession, to single anyone out, let alone Feynman himself. (The members of the group presumably also attach the metalinguistic information ‘is called “Feynman”’, but this will still be insufficient if there is more than one so-named physicist.) This point is relevant to the Description Theory, as it would appear that in this case the semantic/cognitive value of the name ‘Feynman’ corresponds to that of either an indefinite description (‘a physicist’) or else an improper definite description (‘the physicist’).
As Ben Caplan (2007) points out, these same arguments apply to Millians who attempt to account for Fregean intuitions of cognitive significance and substitution failure by suggesting that uses of names in context additionally assert or suggest some descriptive content (for instance, Soames 2002 and Thau 2002).
Kripke and Donnellan (and, anticipating them, Peter Geach 1969) offer an externalist alternative to the theory that cognitive significance determines reference (see the entry on externalism about mental content). Donnellan argues that an “omniscient being who sees the whole history of the affair” (1972, 355) is in better shape to determine the referent of a particular name than one who limits themselves to the (possibly distorted or attenuated) descriptive content associated with the name by a group of agents. Kripke (1980, 91) suggests that the reference of a name is established by a dubbing ceremony (or “baptism”) at which the dubee is indicated by a demonstration or uniquely referring description. All uses of the name that derive from this source (uses deriving from the baptism itself, or acquired from someone who was present at the baptism, or from someone who acquired it from someone who was present at the baptism, etc.) refer to the original dubee, even if the speaker associates the name with a description that is untrue of that dubee.
Evans (1973) offers the case of ‘Madagascar’ as a counterexample to Kripke's externalist theory. That name originally referred to a portion of mainland Africa, but its reference subsequently shifted to the island off the coast, as a result of a miscommunication propagated by Marco Polo. Despite the fact that there is a continuous “chain” of derived uses of the name ‘Madagascar’ going back to the baptism of the mainland, the name as used now refers to an island.
Kripke includes the following caveat in his account of the reference-passing links in a causal-historical chain:
When the name is ‘passed from link to link’, the receiver of the name must, I think, intend when he learns it to use it with the same reference as the man from whom he heard it. If I hear the name ‘Napoleon’ and decide it would be a nice name for my pet aardvark, I do not satisfy this condition. (Kripke 1980: 96)
Kripke's condition distinguishes reference-passing from what we might call “vehicle-passing” or etymological relation. It is the latter that Leigh Fermor chronicles in the following passage:
The Roman imperial mantle on Greek shoulders has led to a splendid confusion; for the word ‘Rum’, on Oriental tongues, referred not only to the Christian Byzantines – they are so styled in the Koran – but, for a century or two, to their conquered territory in Asia Minor; it designated the empire of the Seldjuk Turks in Anatolia with its capital at Konia (Iconium), reigned over by the ‘Sultans of Rum’. To tangle matters still further the word Romania was often used in the West, especially during the crusades, to specify the parts of the Eastern Empire which lay in Europe; the Turks extended ‘Rum’ into ‘Rumeli’, (‘land of the Rumis‘) to cover the same area. One still finds the confusing word ‘Rumelia’ on old maps. (In Greece, Rumeli now specifically applies to the great mountainous stretch of continental Greece running from the Adriatic to the Aegean, north of the Gulf of Corinth and south of Epirus and Thessaly.) (Leigh Fermor 1966, 98)
When the Turks applied the word ‘Rum’ to their conquered territory, they were influenced in their choice by a previous use of the same word to refer to the Byzantine Empire, but they did not intend to use the word in exactly the same way. Though not as dramatic as calling one's pet aardvark ‘Napoleon‘, this is a case in which the intentional condition is not satisfied. It is conceivable that all true cases of a vehicle changing its reference are purposeful, and hence break the causal-historical chain by violating this condition.
Kripke himself admits (1980, 93; 97) that his rough account provides something less than an airtight theory. Even if the determinants of a name's reference are more complex than Kripke's simplified tale would allow, they do seem to remain in the purview of the “being who sees the whole history of the affair,” and do not correspond to the description summing up the cognitive significance of the name for its users.
Causal descriptivism (Loar 1976, Lewis 1984, Kroon 1987, Jackson 1998) considers a token of a name t to have the semantic value of the definite description ‘the individual dubbed in the ceremony connected by a causal-historical chain to t’. While this view illustrates the peculiar resilience of descriptivism, its detractors claim that such a description will not capture the cognitive significance of the name-token (especially among those unacquainted with the causal-historical account of reference).
On Russell's theory, definite descriptions are quantifiers, and as such can in principle take wide or narrow scope with respect to sentential operators. As Dummett points out (1973, 110-151), this means that all Kripke's modal argument shows is that names (considered as descriptions) obligatorily take wide scope with respect to metaphysical modal operators. A representation of (9) on which the names (each interpreted as the description ‘the teacher of Alexander’) take wide scope (giving a reading that is false) appears below:
- ∃x(∀z(teach-alex z ↔ z = x) ∧ ∃y(∀z(teach-alex z ↔ z = y) ∧ ◊(x ≠ y)))
There is a problem with this approach, however (see Soames 1998). Since names must sometimes take narrow scope in attitude contexts (in order to account for Fregean intuitions), a sentence where the name appears (on the surface) below an attitude verb which is itself in the scope of a metaphysical modal would place conflicting requirements on the putative scope of a name:
- Homer might have believed that the Morning Star is the Evening Star
On the favoured reading (on which the proposition Homer might have believed is non-trivial), (13) provides a counterexample to the proposed rule that names take scope over metaphysical modals (there might, of course, be a more complex rule in play that can explain both judgments—viz. that names take scope over metaphysical modals unless doing so will cause them to scope over an attitude verb).
Kripke's modal argument claims that names differ from definite descriptions in that they are rigid designators. However, certain definite descriptions designate rigidly too. For instance, the extension of ‘the even prime’ is 2 at every possible world. A popular descriptivist response to the modal argument is then to semantically equate names with rigid definite descriptions. Indeed, we can inoculate existing description theories against the modal argument by the process of rigidification. Two ways to rigidify a non-rigid definite description have been explored. One is to prefix it with a term-forming dthat operator (Kaplan 1978). Another is to apply the “actually” operator to the nominal restrictor of the description. These procedures are described below.
The “actually” operator in modal logic is supposed to mirror the behavior of the English adverb ‘actually’ and adjective ‘actual’ in the examples below:
- I thought your yacht was longer than it actually is.
- The actual teacher of Alexander might not have taught Alexander.
Models for the interpretation of modal logic consist of a pointed set (i.e., a set with a designated member) of possible worlds, an accessibility relation, and a valuation function. The designated world, also known as the actual world, is required to define truth in such structures (as opposed to global truth or satisfiability). Actualist modal logics include an operator @ (the “actually” operator) that shifts the point of evaluation of the formula in its scope back to the actual world. It follows that while the intension of the description ‘the number of planets’ picks out a different number at certain different possible worlds, the intension of ‘the @(number of planets)’ picks out 8 at every world (making it rigid). This is because the predicate ‘@(number of planets)’ is true, at a world w, of whatever the predicate ‘number of planets’ is true of at the actual world. In general, adding the ‘actually’ operator takes a possibly nonrigid definite description ‘the F’ and turns it into a description ‘the @(F)’ whose intension rigidly picks out the object that is F in the actual world.
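In the same schematic terms (again our own gloss rather than a formula from the sources cited), writing w@ for the designated actual world:
intension of ‘the F’ = λw . the unique F at w
intension of ‘the @(F)’ = λw . the unique F at w@
The second function ignores its world argument, so it returns the same object at every world, which is just what rigidity requires.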
Rigidification comes at a cost. Prior to rigidification, we could distinguish the descriptivist intensions of the names ‘the Morning Star’ and ‘the Evening Star’. However, once they are rigidified (for instance to ‘the @(heavenly body seen in the morning)’ and ‘the @(heavenly body seen in the evening)’) their intensions coincide (as the constant function that picks out Venus relative to every possible world). Thus we can no longer distinguish their cognitive values on an intensionalist sense theory. Those who adopt actualist rigidification therefore tend to be Russellians, and equate the semantic/cognitive value of a name not with its intension, but with a structured function (the contribution a Russellian definite description makes to a structured proposition). With this technology, theorists can distinguish the values of the two descriptions above, since they are structures with different components (cf. Carnap's notion of intensional isomorphism).
Explaining rigidification by the dthat term-forming operator requires some further setup, provided in the following section.
Before moving on, note that certain non-rigid definite descriptions, such as ‘the man in the corner’, pattern with names rather than with the “role-type” definite descriptions in examples like (8) (the terminology, and its exegesis, can be found in Rothschild 2007). In other words, the modal argument does not distinguish ordinary names from a broad class of “particularized” definite descriptions. Moreover, certain special names, such as ‘Miss America’, behave in much the same way as role-type descriptions.
On Kaplan's (1989) semantics of indexical expressions (see the relevant section of the entry on indexicals), the character of an expression is distinguished from its content relative to a context. Indexicals, like the pronoun ‘I’, receive a different content (they are used to refer to different individuals) in different contexts. Kaplan nevertheless thought something remained constant in different contexts—the linguistic meaning of ‘I’. He proposed to identify this meaning with a function from contexts to contents (what he called the “character” of ‘I’). In the case of the first-person pronoun, this function maps a context c onto the speaker in c. The character of ‘I’ thus corresponds to the linguistic rule that ‘I’ picks out the speaker of the context.
As Kaplan (1989, 529ff) emphasizes, character has affinities with Frege's notion of sense. It corresponds to a level of linguistic meaning distinct from reference. It also captures (at a certain level) the cognitive significance of an indexical expression for those competent in the language. As with Frege's notion, there is a connection between character and definite descriptions. Kaplan (1978) introduced a term-forming expression, dthat, which combines with a definite description. The character of this complex term corresponds to the intension of the embedded definite description (i.e., the function that maps any world w onto the unique object that satisfies the descriptive content in w). That is to say, the character of the term ‘dthat(the F)’ is the function that maps any context c onto the constant function from worlds to the object that is F in c. For example, ‘dthat(the speaker)’ (simplifying somewhat) has the same character as ‘I’. Adding the operator thus rigidifies the description by projecting its descriptive content onto the level of character.
A name, considered as a generic syntactic type (see Syntax above), refers to different individuals depending on the context. We might therefore treat the generic name ‘Alice’ as having a character equivalent to ‘dthat(the individual called “Alice”)’. The cognitive significance ascribed to a generic name, on this account (corresponding to its character), is the same as that ascribed by metalinguistic descriptivists (on their account, corresponding to its intension). Indeed, this account is simply a rigidified version of metalinguistic descriptivism (for a view close to this one, see Pelczar and Rainsbury 1998).
On the other hand, if names are individuated, as Kripke and Kaplan would have it, by naming ceremony (see Syntax), a different view of the character of names applies. Just as the reference of ‘I’ depends on who is speaking, the reference of ‘Alice-1’ (a particular name of the generic form ‘Alice’) depends on the individual dubbed in an earlier naming ceremony. We might treat the character of ‘Alice-1’ as follows (Haas-Spohn 1995):
- The individual dubbed in the ceremony that is the source (in c) of ‘Alice-1’
Once again, this is a rigidified version of an existing description theory, viz. causal descriptivism. Other versions of descriptivism that we have seen so far could also be rigidified using the dthat operator. The context-sensitive theory, on which the nominal predicate is itself provided by context, is an interesting (and extremely powerful) variant.
A general problem that the “character-descriptivist” accounts above must face is Frege's puzzle of the failure of coreferring names to intersubstitute salva veritate in propositional attitude contexts. Propositional attitude reports containing names that differ in character but not content, on the standard semantics provided by Kaplan (1989), will themselves not differ in content (and thus truth-value). In order to parlay a difference in character into a difference in truth-value, propositional attitude verbs would need to operate on character rather than content. This approach is defended in Recanati 2000 and Schlenker 2003. Two-dimensional semantics (Jackson 1998, Chalmers 2004) instead identifies two different kinds of intension, one of which is closely related to character (it is a function from contexts or epistemically possible states to extensions) and serves as the object of attitudes.
Kaplan himself rules out contextual variation (i.e., a non-constant character) in names. As he writes (1989, 563):
Those who suggest that proper names are merely one species of indexical depreciate the power and the mystery of the causal chain theory.
According to him, Kripke's theory of how names refer is “pre-semantic.” Unlike the character of ‘I’, which captures its linguistic meaning, the suggested non-constant character for ‘Alice-1’ would encapsulate a pre-semantic fact, one that doesn't belong in the language-user's repertoire.
While the causal-historical theory implies that the reference of a name is determined by facts about the context, this context-dependence should not necessarily be encoded in its character (linguistic meaning). The covariation of reference with alternative facts about the context could, however, correspond to our imperfect knowledge of the settled (pre-semantic) facts that determine meaning. That is to say, while the character-like function from contexts to individuals in (16) is not (on this view) the linguistic meaning of ‘Alice-1’, it might nevertheless correspond to its cognitive significance for language-users with a less than perfect grasp of the pre-semantic facts. This means that the function will still fulfill one of the key roles of sense. Stalnaker (1978) pursues a solution to Frege's puzzle along these lines.
Kripkeans and Fregeans alike assume that a name determines a (perhaps partial) function from worlds to entities (for Kripke this function is constant). Millianism and the intensional interpretation of Frege's view both entail this thesis (“functionalism”) about names. Russellian descriptivism also entails it, for a Russellian description (if proper) has a unique witness at certain worlds (including the actual one). However, the thesis is not true of all uses of names, as we can see from the following case, due to Josh Dever (1998) (see also Cumming 2008). Suppose Sherlock Holmes gives an interim account of a case that begins as follows:
- The murder was committed by two individuals, call them X and Y. First note that, since there is no sign of a struggle, both X and Y were known to the victim.
‘X’ and ‘Y’ are names (or at least seem to be). It is possible, as recognized by Kripke (1980, 91), to introduce a name in the course of a conversation. Kripke only considered names that were introduced using a definite description (for instance, Evans' example of the introduction of ‘Julius’ to refer to the inventor of the zip), and so had a determinate reference (and intension). In (17), ‘X’ and ‘Y’ are interchangeable names for the pair of murderers. If the murder was in fact committed by Louis and Auguste Lumiere, then we might propose that the conjoined noun phrase ‘X and Y’ refers to them. However, there seems to be no consideration in favour of treating ‘X’ as referring to Louis rather than Auguste or vice versa.
In fact, Dever's case is more complex than it needs to be. A simple case of a name introduced by an indefinite noun phrase makes the same point.
- A man, call him ‘Ernest’, was walking in the park at 3pm today. Ernest sat down on this bench. (cf. Geurts 1999: 204)
Some would argue that the use of ‘Ernest’ in the second sentence in (18) is referential, referring to the individual the speaker of (18) had in mind. However, it is possible that one who utters (18) has no-one in mind (consider Holmes concluding (18) on the basis of statistical patterns of pedestrian traffic in the park). It is also plausible that (18) is true even if the speaker is wrong about the person they had in mind, so long as there was another man who acted in the manner described. On such an understanding of (18), the occurrence of ‘Ernest’ is interpreted, not referentially, but as an existentially bound variable. It follows that there is no function from worlds to individuals that corresponds to this use of the name (if there were, it would single out a determinate individual at the actual world, and we have just seen that this leads to the wrong interpretation). Note that appeal to Kaplan's notion of character is not availing in this instance, since character is itself functional. If the character determines a functional content (an intension) at a context c, then it will determine a referent at c (the result of applying the intension to the world coordinate of c).
One view of names that is able to account for examples like (18) treats them as anaphoric expressions, similar to pronouns. The anaphoric view is compatible with descriptivism (for instance, Geurts is a proponent of both views), so long as definite descriptions are understood as anaphors (as they are in Heim 1982). An early proponent of the anaphoric view is Fred Sommers. Sommers thinks of Kripke's dubbing ceremony as “an act that introduces a special duty pronoun” (1980, 230). Burge, who is, as remarked above, a demonstrativist, represents the semantic value of a demonstrative (and so an argumental occurrence of a name) with a variable. When this variable remains free, it is given a value by the speaker's act of demonstration (modeled by a variable assignment). However, Burge also (1973, 436) anticipates cases in which the variable is bound by a preceding quantifier (including an example like (18), in which the quantifier is existential). In such cases, the demonstrative (or argumental name) is interpreted as “a pronominal place marker” (1973, 436)—i.e., an anaphor.
- Anderson, J., 2006, The Grammar of Names, Oxford: Oxford University Press.
- Bach, K., 1981, “What's in a name?”, Australasian Journal of Philosophy, 59: 371–86.
- Braun, D., 1993, “Empty Names”, Noûs, 27(4): 449–69.
- Burge, T., 1973, “Reference and proper names”, Journal of Philosophy, 70(14): 425–39.
- –––, 1977, “Belief De Re”, The Journal of Philosophy, 74: 338–62.
- –––, 1979, “Individualism and the Mental”, in P. French, T. Euhling and H. Wettstein, eds., Midwest Studies in Philosophy (Volume 4), Minneapolis: University of Minnesota Press, pp. 73–121.
- Caplan, B., 2007, “Millian Descriptivism”, Philosophical Studies, 133: 181–198.
- Carnap, R., 1947, Meaning and Necessity, Chicago: University of Chicago Press.
- Carroll, J., 1985, What's in a Name?, New York: Freeman and Company.
- Chalmers, D., 2004, “The Foundations of Two-Dimensional Semantics”, in M. Garcia-Carpintero and J. Macia, eds., Two-Dimensional Semantics: Foundations and Applications, Oxford: Oxford University Press.
- Church, A., 1951, “A Formulation of the Logic of Sense and Denotation”, in P. Henle, M. Kallen, and S. K. Langer, eds., Structure, Method, and Meaning, New York: Liberal Arts Press.
- Crimmins, M., and J. Perry, 1989, “The Prince and the Phonebooth”, Journal of Philosophy, 86(12): 685–711.
- Cumming, S., 2008, “Variabilism”, Philosophical Review, 117(4): 525–554.
- Dever, Josh, 1998, Variables, Ph.D. Dissertation, Philosophy Department, University of California/Berkeley. [available online.]
- Devitt, M., 1981, Designation, New York: Columbia University Press.
- Donnellan, K., 1972, “Proper Names and Identifying Descriptions”, in D. Davidson and G. Harman, eds., Semantics of Natural Language, Dordrecht: D. Reidel, pp. 356–79.
- Dummett, M., 1973, Frege: Philosophy of Language, Cambridge, MA: Harvard University Press.
- Elbourne, P., 2005, Situations and Individuals, Cambridge, MA: MIT Press.
- Evans, G., 1973, “A Causal Theory of Names”, Proceedings of the Aristotelian Society (Supplementary Volume), 47: 187–208.
- –––, 1982, The Varieties of Reference, Oxford: Blackwell.
- Everett, A., and T. Hofweber, eds., 2000, Empty Names, Fiction and the Puzzles of Non-Existence, Stanford : CSLI Publications.
- Fara, D. G., 2014, “‘Literal’ Uses of Proper Names”, in A. Bianchi, ed., On Reference, Oxford: Oxford University Press.
- Frege, G., 1952, “On Sense and Reference”, in P. Geach and M. Black, eds., Translations from the Philosophical Writings of Gottlob Frege, Oxford: Blackwell, pp. 56–79.
- –––, 1956, “The Thought: A Logical Inquiry”, Mind, 65(259): 289–311.
- Geach, P., 1969, “The Perils of Pauline”, Review of Metaphysics, 23(2): 287–300.
- Geurts, B., 1997, “Good News about the Description Theory of Names”, Journal of Semantics, 14: 319–48.
- Geurts, B., 1999, Presuppositions and Pronouns, Amsterdam: Elsevier.
- Haas-Spohn, U., 1995, Versteckte Indexikalität und subjektive Bedeutung, Berlin: Akademie Verlag.
- Heim, I., 1982, On the Semantics of Definite and Indefinite Noun Phrases, Ph.D. thesis, Linguistics Department, University of Massachusetts at Amherst.
- Jackson, F., 1998, From Metaphysics to Ethics: a Defence of Conceptual Analysis, Oxford: Oxford University Press.
- Kaplan, D., 1978, “Dthat”, in P. Cole, ed., Syntax and Semantics (Volume 9: Pragmatics), New York: Academic Press, pp. 221–43.
- –––, 1989, “Demonstratives/Afterthoughts”, in J. Almog, J. Perry and H. Wettstein, eds., Themes from Kaplan, Oxford: Oxford University Press, pp. 481–614.
- –––, 1990, “Words”, Proceedings of the Aristotelian Society (Supplementary Volume), 64: 93–119.
- –––, 2012, “An Idea of Donnellan”, in J. Almog and P. Leonardi, eds., Having in Mind: the Philosophy of Keith Donnellan, New York: Oxford University Press, pp. 122–175.
- Kneale, W., 1962, “Modality De Dicto and De Re”, in E. Nagel, P. Suppes and A. Tarski, eds., Logic, Methodology and Philosophy of Science, Proceedings of the 1960 International Congress, Stanford: Stanford University Press, pp. 622-33.
- Kripke, S., 1980, Naming and Necessity, Cambridge, MA: Harvard University Press.
- Kroon, F., 1987, “Causal Descriptivism”, Australasian Journal of Philosophy, 65: 1–17.
- Larson, R. and G. Segal, 1995, Knowledge of Meaning, Cambridge, MA: MIT Press.
- Leigh Fermor, P., 1966, Roumeli, New York: Harper & Row.
- Lewis, D., 1984, “Putnam's Paradox”, Australasian Journal of Philosophy, 62: 221–36.
- Loar, B., 1976, “The Semantics of Singular Terms”, Philosophical Studies, 30: 353–77.
- McDowell, J., 1977, “On the Sense and Reference of a Proper Name”, Mind, 86: 159–85.
- Marcus, R. B., 1947, “The Identity of Individuals in a Strict Functional Calculus of Second Order”, Journal of Symbolic Logic, 12(1): 12–15.
- –––, 1961, “Modalities and Intensional Languages”, Synthese, 13(4): 303–322.
- Matushansky, O., 2008, “On the Linguistic Complexity of Proper Names”, Linguistics and Philosophy, 31: 573–627.
- Montague, R., 1973, “The Proper Treatment of Quantification in Ordinary English”, in J. Hintikka, J. Moravcsik and P. Suppes, eds., Approaches to Natural Language, Dordrecht: D. Reidel, pp. 221-42.
- Mill, J., 1973, “A System of Logic, Ratiocinative and Inductive”, in J. Robson, ed., The Collected Works of J. S. Mill (Volumes 7–8), Toronto: University of Toronto Press.
- Payne, J., and R. Huddleston, 2002, “Nouns and Noun Phrases”, in R. Huddleston and G. Pullum, eds., The Cambridge Grammar of the English Language, Cambridge: Cambridge University Press.
- Pelczar, M. and J. Rainsbury, 1998, “The Indexical Character of Names”, Synthese 114: 293–317.
- Putnam, H., 1975, “The Meaning of ‘Meaning’”, in K. Gunderson, ed., Language, Mind, and Knowledge, Minnesota Studies in the Philosophy of Science (Volume 7), Minneapolis, Minnesota: University of Minnesota Press, pp. 131–93.
- Recanati, F., 2000, Oratio Obliqua, Oratio Recta: an Essay on Metarepresentation, Cambridge, MA: MIT Press.
- Richard, M., 1983, “Direct Reference and Ascriptions of Belief”, Journal of Philosophical Logic, 12: 425–452.
- Rothschild, D., 2007, “Presuppositions and Scope”, Journal of Philosophy, 104(2): 71–106.
- Russell, B., 1911, “Knowledge by Acquaintance and Knowledge by Description”, Proceedings of the Aristotelian Society, 11: 108–28.
- Russell, B., 1956, “The Philosophy of Logical Atomism”, in R. Marsh, ed., Logic and Knowledge, New York: Capricorn, pp. 177–281.
- Sainsbury, M., 1993, “Russell on Names and Communication”, in A. Irvine and G. Wedeking, eds., Russell and Analytic Philosophy, Toronto: University of Toronto Press, pp. 3–21.
- Salmon, N., 1986, Frege's Puzzle, Cambridge, MA: MIT Press.
- Schlenker, P., 2003, “A Plea for Monsters”, Linguistics and Philosophy, 26: 29–120.
- Searle, J., 1958, “Proper Names”, Mind, 67(266): 166–73.
- Soames, S., 1987, “Substitutivity”, in J. Thomson, ed., On Being and Saying: Essays in Honor of Richard Cartwright, Cambridge: MIT Press, pp. 99–132.
- –––, 1989, “Direct Reference and Propositional Attitudes”, in J. Almog, J. Perry and H. Wettstein, eds., Themes from Kaplan, Oxford: Oxford University Press, pp. 481–614.
- –––, 1998, “The Modal Argument: Wide Scope and Rigidified Descriptions”, Noûs, 32(1): 1–22.
- –––, 2002, Beyond Rigidity: The Unfinished Semantic Agenda of Naming and Necessity, New York, NY: Oxford University Press.
- Stalnaker, R., 1978, “Assertion”, in P. Cole, ed., Syntax and Semantics (Volume 9: Pragmatics), New York: Academic Press, 315–32.
- Strawson, P., 1959, Individuals: an Essay on Descriptive Metaphysics, London: Methuen.
- Thau, M., 2002, Consciousness and Cognition, Oxford: Oxford University Press.
- Wittgenstein, L., 1953, Philosophical Investigations, translated by G. Anscombe, Oxford: Basil Blackwell.
descriptions | Frege, Gottlob | indexicals | logic: intensional | mental content: externalism about | nonexistent objects | Plato: Cratylus | propositional attitude reports | reference | rigid designators | Russell, Bertrand | semantics: two-dimensional | singular terms: medieval theories of
Summary: We consider characters and strings, two important primitive data types in many languages. Characters are the building blocks of strings; strings, in turn, are combinations of characters. Both characters and strings are useful for input and output.
A character is a small, repeatable unit within some system of writing -- a letter or a punctuation mark, if the system is alphabetic, or an ideogram in a writing system like Han (Chinese). Characters are usually put together in sequences that computer scientists call strings.
Although early computer programs focused primarily on numeric processing, computing has since grown to encompass a wide variety of algorithms that work with characters and strings. Some of the more interesting algorithms we will consider involve these data types. Hence, we must learn how to use these building blocks.
As you might expect, Scheme needs a way to distinguish between many different but similar things, including: characters (the units of writing), strings (formed by combining characters), symbols (which look like strings, but are treated as atomic and also cannot be combined or separated), and variables (names of values). Similarly, Scheme needs to distinguish between numbers (which you can compute with) and digit characters (which you can put in strings).
In Scheme, a name for any of the text characters can be formed by placing #\ before that character. For instance, #\a denotes the lower-case a. Of course, lower-case a should be distinguished from the upper-case A character (denoted by #\A), from the symbol that you obtain with 'a, from the string "a", and from the name a. Similarly, #\3 denotes the character 3 (to be distinguished from the number 3) and the expression #\? denotes the question mark (to be distinguished from a symbol and a name that look quite similar).
In addition, some characters are named by pound, backslash, and a longer name. In particular, the expression #\space denotes the space character, and #\newline denotes the newline character (the one that is used to terminate lines of text files stored on Unix and Linux systems).
In any implementation of Scheme, it is assumed that the available characters can be arranged in sequential order (the “collating sequence” for the character set), and that each character is associated with an integer that specifies its position in that sequence. In ASCII, the numbers that are associated with characters run from 0 to 127; in Unicode, they lie within the range from 0 to 65535. (Fortunately, Unicode includes all of the ASCII characters and associates with each one the same collating-sequence number that ASCII uses.) Applying the built-in char->integer procedure to a character gives you the collating-sequence number for that character; applying the converse procedure, integer->char, to an integer in the appropriate range gives you the character that has that collating-sequence number.
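For example, in an implementation that uses the ASCII codes as collating-sequence numbers (as almost all do), we would expect results like these:
(char->integer #\a)   ; => 97
(char->integer #\A)   ; => 65
(integer->char 100)   ; => #\d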
The importance of the collating-sequence numbers is that they
extend the notion of alphabetical order to all the characters.
Scheme provides five built-in predicates for comparing characters (char<?, char<=?, char=?, char>=?, and char>?). They all work by determining which of the two characters comes first in the collating sequence (that is, which one has the lower collating-sequence number).
The Scheme specification requires that if you compare two capital letters to each other or two lower-case letters to each other, you'll get standard alphabetical order:
(char<? #\A #\Z)
must be true, for instance. If you compare a capital letter with a lower-case letter, though, the result depends on the design of the character set. In ASCII, every capital letter (even #\Z) precedes every lower-case letter (even #\a). Similarly, if you compare two digit characters, the specification guarantees that the results will be consistent with numerical order: #\0 precedes #\1, which precedes #\2, and so on. But if you compare a digit with a letter, or anything with a punctuation mark, the results depend on the character set.
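Here are a few illustrative comparisons; the first two are guaranteed by the specification, while the third holds in ASCII but is, strictly speaking, implementation-dependent:
(char<? #\a #\b)   ; => #t, alphabetical order among lower-case letters
(char<? #\0 #\9)   ; => #t, numerical order among digit characters
(char<? #\Z #\a)   ; => #t in ASCII, where capitals precede lower-case letters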
Because there are many applications in which it is helpful to ignore the distinction between a capital letter and its lower-case equivalent in comparisons, Scheme also provides case-insensitive versions of the comparison procedures: char-ci<?, char-ci<=?, char-ci=?, char-ci>=?, and char-ci>?. These procedures essentially convert all letters to the same case before comparing them.
There are also two procedures for converting case, char-upcase and char-downcase. If its argument is a lower-case letter, char-upcase returns the corresponding capital letter; otherwise, it returns the argument unchanged. If its argument is a capital letter, char-downcase returns the corresponding lower-case letter; otherwise, it returns the argument unchanged.
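For example, we would expect:
(char-upcase #\q)     ; => #\Q
(char-downcase #\Q)   ; => #\q
(char-upcase #\?)     ; => #\?, since the argument is not a lower-case letter
(char-ci=? #\a #\A)   ; => #t, because case is ignored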
Scheme provides several one-argument predicates that apply to characters:
- char-alphabetic? determines whether its argument is a letter (#\a through #\z or #\A through #\Z, in English).
- char-numeric? determines whether its argument is a digit character (#\0 through #\9 in our standard base-ten numbering system).
- char-whitespace? determines whether its argument is a “whitespace character”, one that is conventionally stored in a text file primarily to position text legibly. In ASCII, the whitespace characters are the space character and four specific control characters: <Control/I> (tab), <Control/J> (line feed), <Control/L> (form feed), and <Control/M> (carriage return). On most systems, #\newline is a whitespace character. On our Linux systems, #\newline is the same as <Control/J> and so counts as a whitespace character.
- char-upper-case? determines whether its argument is a capital letter.
- char-lower-case? determines whether its argument is a lower-case letter.
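For instance, we would expect:
(char-alphabetic? #\a)       ; => #t
(char-alphabetic? #\7)       ; => #f
(char-numeric? #\7)          ; => #t
(char-whitespace? #\space)   ; => #t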
It may seem that it's easy to implement some of these operations. For example, you might want to implement char-alphabetic? using a strategy something like the following: a character is alphabetic if it is between #\a and #\z or between #\A and #\Z in the collating sequence.
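Expressed in code, that strategy might look something like the following sketch (my-char-alphabetic? is our own name for the illustration, not a standard procedure):
(define my-char-alphabetic?
  (lambda (ch)
    ; ch counts as a letter if it falls between #\a and #\z
    ; or between #\A and #\Z in the collating sequence
    (or (and (char<=? #\a ch) (char<=? ch #\z))
        (and (char<=? #\A ch) (char<=? ch #\Z)))))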
However, that implementation is not necessarily correct for all versions of Scheme: Since the Scheme specification does not guarantee that the letters are collated without gaps, it's possible that this algorithm treats some non-letters as letters. The alternative, comparing to each valid letter in turn, seems inefficient. By making this procedure built-in, the designers of Scheme have encouraged programmers to rely on a correct (and, presumably, efficient) implementation.
Note that all of these predicates assume that their parameter is a character. Hence, if you don't know the type of a parameter, you will need to first ensure that it is a character. For example,
(char-lower-case? 23)
char-lower-case?: expects argument of type <character>; given 23
By guarding the call with char?, the expression simply evaluates to #f instead of signalling an error:
(and (char? 23) (char-lower-case? 23))
If you use this pattern often, you can name it:
(define lower-case-char?
  (lambda (x)
    (and (char? x) (char-lower-case? x))))
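With that helper in place, these illustrative calls evaluate as shown:

(lower-case-char? 23)    ; => #f
(lower-case-char? #\a)   ; => #t
(lower-case-char? #\A)   ; => #f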
As you've just learned, characters provide the basic building blocks of the things we might call “texts”. What do we do with characters? We combine them into strings.
A string is a sequence of zero or more characters. Most strings
can be named by enclosing the characters they contain between
plain double quotation marks, to produce a string
literal. For instance,
"periwinkle" is the
nine-character string consisting of the characters
#\e, in that order. Similarly,
is the zero-character string (the null string
or the empty string).
String literals may contain spaces and newline characters; when such
characters are between double quotation marks, they are treated like any
other characters in the string. There is a slight problem when one wants
to put a double quotation mark into a string literal: To indicate that the
double quotation mark is part of the string (rather than marking the end of
the string), one must place a backslash character immediately in front of
it. For instance,
"Say \"hi\"" is the eight-character
string consisting of the characters
#\S, #\a, #\y, #\space, #\", #\h, #\i, and
#\", in that order. The backslash
before a double quotation mark in a string literal is an escape
character, present only to indicate that the character immediately
following it is part of the string.
This use of the backslash character causes yet another slight problem:
What if one wants to put a backslash into a string? The solution is similar:
Place another backslash character immediately in front of it. For instance,
"a\\b" is the three-character string consisting of the
that order. The first backslash in the string literal is an escape, and
the second is the character that it protects, the one that is part of the
Scheme provides several basic procedures for working with strings:
The string? predicate determines whether its argument is or is not a string.
The make-string procedure constructs and returns
a string that consists of
count repetitions of
a single character. Its first argument indicates how long the string
should be, and the second argument specifies which character it should
be made of. For instance, the following code constructs and returns
the string "aaaaa":
(make-string 5 #\a)
The string procedure takes any number of
characters as arguments and constructs and returns a string consisting
of exactly those characters. For instance,
(string #\H #\i
#\!) constructs and returns the string "Hi!".
This procedure can be useful for building strings with quotes.
(string #\" #\") produces "\"\"", the two-character string consisting of two double quotation marks.
(Isn't that ugly?)
The string->list procedure converts a string into a
list of characters. The
list->string procedure converts
a list of characters into a string. It is invalid to call
list->string on a non-list or on a list that
contains values other than characters.
(string->list "Hello")
(#\H #\e #\l #\l #\o)
(list->string (list #\a #\b #\c))
"abc"
(list->string (list 'a 'b))
list->string: expects argument of type <list of character>; given (a b)
The string-length procedure takes any string
as argument and returns the number of characters in that string.
For instance, the value of
(string-length "alfalfa") is 7 and the value of
(string-length "a\\b") is 3 (the escaping backslash in the literal does not count).
The string-ref procedure is used to
select the character at a specified position within a string.
It uses zero-based indexing; the position is specified
by the number of characters that precede it in the string. (So the
initial character in the string is at position 0, the next at position
1, and so on.) For instance, the value of
(string-ref "periwinkle" 4) is
#\w -- the character that follows four other
characters and so is at position 4 in zero-based indexing.
Strings can be compared for “lexicographic order”, the
extension of alphabetical order that is derived from the collating
sequence of the local character set. Once more, Scheme provides both
case-sensitive and case-insensitive versions of these predicates:
string<?, string<=?, string=?, string>=?, and
string>? are the case-sensitive versions;
string-ci<?, string-ci<=?, string-ci=?, string-ci>=?, and
string-ci>? are the case-insensitive ones.
The substring
procedure takes three arguments. The first is a string and the second
and third are non-negative integers not exceeding the length of that
string. The substring procedure returns the
part of its first argument that starts after the number of characters
specified by the second argument and ends after the number of characters
specified by the third argument. For instance:
(substring "hypocycloid" 3 8) returns the substring
"ocycl" -- the substring that starts after the initial
three characters, the hyp,
and ends after the eighth character, the
l. (If you
think of the characters in a string as being numbered starting
from 0, substring takes the characters from
position start through position end - 1.)
The string-append
procedure takes any number of
strings as arguments and returns a string formed by concatenating those
arguments. For instance,
(string-append "al" "fal" "fa")
returns "alfalfa".
The number->string procedure takes any Scheme number
as its argument and returns a string that denotes the number.
The string->number procedure provides the inverse
operation. Given a string that represents a number, it returns the
corresponding number. On some implementations of Scheme, when you
give string->number an inappropriate input, it
returns the value
#f (which represents “no”
or “false”). You are then responsible for checking the
result before using it as a number. For instance,
(string->number "3 + 4i")
returns #f, because "3 + 4i" is not the written form of a Scheme number.
When a character is stored in a computer, it must be represented as a
sequence of bits (“binary digits”,
that is, zeroes and ones). However, the choice of a particular bit
sequence to represent a particular character is more or less arbitrary.
In the early days of computing, each equipment manufacturer developed one
or more “character codes” of its own, so that, for example,
the capital letter A was represented by one sequence of bits
on an IBM 1401 computer, by
000001 on a Control Data 6600, by
11000001 on an IBM 360, and so on. This made it troublesome
to transfer character data from one computer to another, since it was
necessary to convert each character from the source machine's encoding
to the target machine's encoding. The difficulty was compounded by the
fact that different manufacturers supported different characters; all
provided the twenty-six capital letters used in writing English and the
ten digits used in writing Arabic numerals, but there was much variation
in the selection of mathematical symbols, punctuation marks, etc.
In 1963, a number of manufacturers agreed to use the American Standard Code for Information Interchange (ASCII), which is currently the most common and widely used character code. It includes representations for ninety-four characters selected from American and Western European text, commercial, and technical scripts: the twenty-six English letters in both upper and lower case, the ten digits, and a miscellaneous selection of punctuation marks, mathematical symbols, commercial symbols, and diacritical marks. (These ninety-four characters are the ones that can be generated by using the forty-seven lighter-colored keys in the typewriter-like part of a MathLAN workstation's keyboard, with or without the simultaneous use of the <Shift> key.) ASCII also reserves a bit sequence for a “space” character, and thirty-three bit sequences for so-called control characters, which have various implementation-dependent effects on printing and display devices -- the “newline” character that drops the cursor or printing head to the next line, the “bell” or “alert” character that causes the workstation to beep briefly, and such.
In ASCII, each character or control character is represented by a sequence of exactly seven bits, and every sequence of seven bits represents a different character or control character. There are therefore 2⁷ (that is, 128) ASCII characters altogether.
Over the last quarter-century, non-English-speaking computer users have grown increasingly impatient with the fact that ASCII does not provide many of the characters that are essential in writing other languages. A more recently devised character code, the Unicode Worldwide Character Standard, currently defines bit sequences for 49194 characters for the Arabic, Armenian, Bengali, Bopomofo, Canadian Syllabics, Cherokee, Cyrillic, Devanagari, Ethiopic, Georgian, Greek, Gujarati, Gurmukhi, Han, Hangul, Hebrew, Hiragana, Kannada, Katakana, Khmer, Latin, Lao, Malayalam, Mongolian, Myanmar, Ogham, Oriya, Runic, Sinhala, Tamil, Telugu, Thaana, Thai, Tibetan, and Yi writing systems, as well as a large number of miscellaneous numerical, mathematical, musical, astronomical, religious, technical, and printers' symbols, components of diagrams, and geometric shapes.
Unicode uses a sequence of sixteen bits for each character, allowing for 2¹⁶ (that is, 65536) codes altogether. Many bit sequences are still unassigned and may, in future versions of Unicode, be allocated for some of the numerous writing systems that are not yet supported. The designers have completed work on the Deseret, Etruscan, and Gothic writing systems, although they have not yet been added to the Unicode standard. Characters for the Shavian, Linear B, Cypriot, Tagalog, Hanunoo, Buhid, Tagbanwa, Cham, Tai, Glagolitic, Coptic, Buginese, Old Hungarian Runic, Phoenician, Avenstan, Tifinagh, Javanese, Rong, Egyptian Hieroglyphic, Meroitic, Old Persian Cuneiform, Ugaritic Cuneiform, Tengwar, Cirth, tlhIngan Hol (that is, “Klingon”; can you tell that CS folks are geeks, even CS folks who work on international standards?), Brahmi, Old Permic, Sinaitic, South Arabian, Pollard, Blissymbolics, and Soyombo writing systems are under consideration or in preparation.
Although some local Scheme implementations use and presuppose the ASCII character set, the Scheme language does not require this, and Scheme programmers should try to write their programs in such a way that they could easily be adapted for use with other character sets (particularly Unicode). In fact, MediaScheme supports Unicode.
Janet Davis
http://www.cs.grinnell.edu/~davisjan/csc/151/2012S/mediascheme/readings/strings-reading.html
Switches, Routers, Bridges and LANs/Routers/Basics of Routing
What Is Routing?
Routing is the process of moving information across an internetwork from source to destination. At least one intermediate node must be encountered along the way. Routing and bridging look similar but the primary difference between the two is that bridging occurs at Layer 2 (the link layer) of the OSI reference model, whereas routing occurs at Layer 3 (network layer). One important difference between routing and bridging is that the layer 3 addresses are allocated hierarchically, so it is possible for a router to have a single rule allowing it to route to an entire address range of thousands or millions of addresses. This is an important advantage in dealing with the scale of the internet, where hosts are too numerous (and are added and removed too quickly) for any router to know about all hosts on the internet.
The role of routing information at the network layer is performed by routers. Routers are the heart of the network layer. First we will look at the architecture of a router and the processing of datagrams in routers, and then we will learn about routing algorithms.
The Architecture of a router
A router will include the following components:
The input port performs several functions. The physical-layer function is performed by the line termination unit. Protocol decapsulation is performed by the data link processing unit. The input port also performs the lookup-and-forwarding function, so that a packet forwarded into the switching fabric of the router emerges at the appropriate output port. Control packets (e.g., packets carrying routing protocol information for RIP, OSPF, etc.) are forwarded to the routing processor. The input port also performs input queuing when the outgoing line is busy.
The output port forwards packets arriving from the switching fabric onto the corresponding output line. It performs the reverse of the input port's physical-layer and data-link functionality. The output port also queues packets that arrive from the switching fabric faster than they can be transmitted.
The routing processor executes routing protocols. It maintains routing information and the forwarding table, and it also performs network management functions within the router.
The job of moving packets from input ports to the appropriate output ports is performed by the switching fabric. Switching can be accomplished in a number of ways:
- Switching via Memory: The simplest, earliest routers switched between input and output ports under direct control of the CPU (routing processor). Whenever a packet arrives at an input port, the routing processor learns about it via an interrupt. It then copies the incoming packet from the input buffer into processor memory. The processor extracts the destination address, looks it up in the appropriate forwarding table, and copies the packet into the output port's buffer. In many modern routers, the lookup of the destination address and the storing (switching) of the packet into the appropriate memory location are performed by processors on the input line cards.
- Switching via Bus: The input port transfers a packet directly to the output port over a shared bus, without intervention by the routing processor. As the bus is shared, only one packet can be transferred over the bus at a time. If the bus is busy, incoming packets have to wait in a queue. The bandwidth of the router is limited by the shared bus, since every packet must cross that single bus. Examples of bus switching include the Cisco 1900 and 3Com's CoreBuilder5.
- Switching via Interconnection Networks: To overcome the bandwidth limitation of a single shared bus, a crossbar switching network can be used. In a crossbar network, the input and output ports are connected by horizontal and vertical buses; with N input ports and N output ports, 2N buses are needed to connect them. To transfer a packet from an input port to the corresponding output port, the packet travels along the horizontal bus until it intersects with the vertical bus that leads to the destination port. If the vertical bus is free, the packet is transferred. If the vertical bus is busy -- because some other input port is transferring a packet to the same destination port -- the packet is blocked and queued at its input port.
Processing the IP datagram
Incoming packets at the input port are stored in a queue to wait for processing. As processing begins, the IP header is processed first. A checksum is computed to identify errors introduced in transmission. If the datagram does not contain an error, the destination IP address field is checked. If the datagram is addressed to the local host, then, depending on the protocol (UDP, TCP, ICMP, etc.), the data field is passed to the corresponding module.
If the destination IP address is not that of the local host, the router looks up the destination IP address in its routing table. The routing table gives the address of the next router to which the packet should be forwarded. Output operations are then performed on the outgoing packet: its TTL field is decreased by one, the header checksum is recalculated, and the packet is forwarded to the output port that leads toward the destination. If the output port is busy, the packet has to wait in the output queue.
The packet scheduler at the output port must choose which packet from the queue to transmit. The selection may be made on a first-come-first-served (FCFS) basis, by priority, or by weighted fair queuing (WFQ), which shares the outgoing link “fairly” among the different end-to-end connections that have packets queued for transmission. Packet scheduling plays a crucial role in providing quality of service. If an incoming datagram contains routing information, the packet is passed to the routing protocol, which modifies the routing table accordingly.
Now we will take different routing algorithms into consideration. There are two types of protocol involved in transferring information from source to destination: routed protocols and routing protocols.
Routed vs Routing Protocol
Routed protocols, such as IP or IPX, are used to direct user traffic between routers. A routed packet contains enough information to enable a router to direct the traffic. A routed protocol defines the fields in a packet and how to use those fields.
Routed protocols include:
- Internet Protocol
- Remote Procedure Call (RPC)
- Novell IPX
- Open Systems Interconnection (OSI) networking protocols
- Banyan Vines
- Xerox Network System (XNS)
Routing protocols allow routers to gather and share routing information in order to maintain and update the routing table, which in turn is used to find the most efficient route to a destination.
Routing protocols include:
- Routing Information Protocol (RIP and RIP II)
- Open Shortest Path First (OSPF)
- Intermediate System to Intermediate System (IS-IS)
- Interior Gateway Routing Protocol (IGRP)
- Cisco's Enhanced Interior Gateway Routing Protocol (EIGRP)
- Border Gateway Protocol (BGP)
Design Goals of Routing Algorithms
Routing algorithms have one or more of the following design goals:
- Optimality: This is the capability of the routing protocol to find and select the best route. Routing metrics are used to compare routes; for example, the number of hops or the delay can be used to find the best path. Paths with fewer hops or with the least delay should be preferred as the best route.
- Simplicity and low overhead: Routing algorithms also are designed to be as simple as possible. The routing algorithm must offer its functionality efficiently, with a minimum of software overhead. Efficiency is particularly important when the software implementing the routing algorithm must run on a computer with limited physical resources, or work with large volumes of routes.
- Robustness and stability: Routing protocol should perform correctly in the face of unusual or unforeseen circumstances, such as hardware failures, high load conditions, and incorrect implementations. This property of routing protocols is known as robustness. The best routing algorithms are often those that have withstood the test of time and that have proven stable under a variety of network conditions.
- Rapid convergence: Routing algorithms must converge rapidly. Convergence is the process by which all routers come to agree on optimal routes. When a network event causes routes to go down or become available, or causes a link cost to change, routers distribute routing update messages, which cause other routers to recalculate optimal routes and eventually agree on the new routes.
- Flexibility: Routing algorithms should also be flexible. They should quickly and accurately adapt to a variety of network circumstances.
Classification of routing algorithms
Routing algorithms are mainly of two types:
- Static routing: In static routing algorithms, the route followed by a packet always remains the same. Static routing is used when routes change very slowly: the network administrator computes the routing table in advance. Static routing has the following advantages:
- Predictability: Because the network administrator computes the routing table in advance, the path a packet takes between two destinations is always known precisely, and can be controlled exactly.
- No overhead on routers or network links: In static routing there is no need for all the routers to send a periodic update containing reachability information, so the overhead on routers or network links is low.
- Simplicity: Configuration for small networks is easy.
Static routing also has disadvantages:
- Lack of scalability: Computing static routes for a small number of hosts and routers is easy, but for larger networks finding static routes becomes cumbersome and may lead to errors.
- If a network segment moves or is added: To implement the change, you would have to update the configuration for every router on the network. If you miss one, in the best case, segments attached to that router will be unable to reach the moved or added segment. In the worst case, you'll create a routing loop that affects many routers
- It cannot adapt to failures in the network: If a link fails on a network using static routing, then even if an alternative link is available that could serve as a backup, the routers will not know to use it.
- Dynamic routing: Machines communicate with each other through a routing protocol and build their routing tables automatically. Each router then forwards packets to the next hop, which is nearer to the destination. With dynamic routing, routes can change quickly: periodic updates are sent on the network, so that if a link cost changes, all the routers on the network learn of it and change their routing tables accordingly. Dynamic routing has the following advantages:
- Scalability and adaptability: A dynamically routed network can grow more quickly and grow larger without becoming unmanageable. It is able to adapt to changes in the network topology brought about by this growth.
- Adaptation to failures in a network: In dynamic routing routers learn about the network topology by communicating with other routers. Each router announces its presence. It also announces the routes it has available to the other routers on the network. Because of this if you add a new router, or add an additional segment to an existing router, the other routers will hear about the addition and adjust their routing tables accordingly.
Dynamic routing also has disadvantages:
- Increase in complexity: With dynamic routing, each router must send periodic updates communicating information about the topology, and it has to decide exactly what information to send. When a router learns network information from other routers, correctly adapting to changes in the network is difficult, and it must be prepared to remove old or unusable routes, which adds further complexity.
- Overhead on the lines and routers: Routers periodically send packets to other routers describing the topology of the network. These packets do not contain user information, only information needed by the routers, so they represent extra overhead on the lines and routers.
Classification of Dynamic Routing Protocols
The first classification is based on where a protocol is intended to be used: between your network and another's network, or within your network: this is the distinction between interior and exterior. The second classification has to do with the kind of information the protocol carries and the way each router makes its decision about how to fill in its routing table which is link-state vs. distance vector.
Link State Routing
In a link-state protocol, a router provides information about the topology of the network in its immediate vicinity and does not provide information about destinations it knows how to reach. This information consists of a list of the network segments, or links, to which it is attached, and the state of those links (functioning or not functioning). This information is then broadcast throughout the network, so every router can build its own picture of the current state of all of the links in the network. Since every router sees the same information, all of these pictures should be the same. From this picture, each router computes its best path to all destinations and populates its routing table with this information. Next we will look at the link-state algorithm, known as Dijkstra's algorithm.
The notation and its meaning are as follows:
- N denotes the set of all nodes in the graph.
- c(v,w) is the link cost from node v to node w, where v and w are in N. If the two nodes are not directly connected, then c(v,w) = ∞. The most general form of the algorithm doesn't require that c(v,w) = c(w,v), but for simplicity we assume that they are equal.
- s is the node executing the algorithm to find the shortest path to all the other nodes.
- N' denotes the set of nodes incorporated so far by the algorithm.
- D(v) is the cost of the path from the source node s to destination node v.
Definition of the Algorithm
In practice each router maintains two lists, known as Tentative and Confirmed. Each of these lists contains a set of entries of the form (Destination, Cost, NextHop).
The algorithm works as follows:
- Initialize the Confirmed list with an entry for myself; this entry has a cost of 0.
- For the node just added to the Confirmed list in the previous step, call it node Next, select its LSP.
- For each neighbor (Neighbor) of Next, calculate the cost (Cost) to reach this Neighbor as the sum of the cost from myself to Next and the cost from Next to Neighbor.
- If Neighbor is currently on neither the Confirmed nor the Tentative list, then add (Neighbor, Cost, NextHop) to the Tentative list, where NextHop is the direction I go to reach Next.
- If Neighbor is currently on the Tentative list, and the Cost is less than the currently listed cost for Neighbor, then replace the current entry with (Neighbor , Cost, NextHop), where NextHop is the direction I go to reach Next.
- If the Tentative list is empty, stop. Otherwise, pick the entry from the Tentative list with the lowest cost, move it to the Confirmed list, and return to step 2.
[algorithm from Computer Networks a system approach – Peterson and Davie.]
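The following is a rough sketch of this Confirmed/Tentative search in Scheme (the graph representation, the procedure names, and the link costs are illustrative assumptions chosen to match the worked example below; they are not part of the published algorithm):

;; The graph is an association list mapping each node to a list of
;; (neighbor . cost) pairs.  These link costs are assumptions that
;; reproduce the table in the example below.
(define graph
  '((A (B . 9) (C . 3) (D . 8))
    (B (A . 9) (E . 2))
    (C (A . 3) (E . 1))
    (D (A . 8) (E . 2))
    (E (B . 2) (C . 1) (D . 2))))

(define (neighbors node) (cdr (assq node graph)))

;; Routing-table entries have the form (destination cost via).  VIA is
;; recorded as in the table below: for the source's own links it is the
;; neighbor itself; otherwise it is the node whose links were examined.
(define (find-entry dest entries)
  (cond ((null? entries) #f)
        ((eq? (caar entries) dest) (car entries))
        (else (find-entry dest (cdr entries)))))

(define (remove-entry entry entries)
  (cond ((null? entries) '())
        ((eq? (caar entries) (car entry)) (cdr entries))
        (else (cons (car entries) (remove-entry entry (cdr entries))))))

(define (lowest-cost entries)
  (let loop ((best (car entries)) (rest (cdr entries)))
    (cond ((null? rest) best)
          ((< (cadr (car rest)) (cadr best)) (loop (car rest) (cdr rest)))
          (else (loop best (cdr rest))))))

;; Steps 2-5: examine the links of the node confirmed last and add or
;; improve a Tentative entry for each of its neighbors.
(define (relax source confirmed tentative last-confirmed)
  (let ((node (car last-confirmed)) (node-cost (cadr last-confirmed)))
    (let loop ((nbrs (neighbors node)) (tentative tentative))
      (if (null? nbrs)
          tentative
          (let* ((nbr  (caar nbrs))
                 (cost (+ node-cost (cdar nbrs)))
                 (via  (if (eq? node source) nbr node))
                 (old  (find-entry nbr tentative)))
            (cond ((find-entry nbr confirmed)          ; already confirmed
                   (loop (cdr nbrs) tentative))
                  ((or (not old) (< cost (cadr old)))  ; new or cheaper path
                   (loop (cdr nbrs)
                         (cons (list nbr cost via)
                               (if old (remove-entry old tentative) tentative))))
                  (else (loop (cdr nbrs) tentative))))))))

;; Steps 1 and 6: start with only the source confirmed, then repeatedly
;; move the cheapest Tentative entry to Confirmed until Tentative is empty.
(define (route-table source)
  (let loop ((confirmed (list (list source 0 '-)))
             (last-confirmed (list source 0 '-))
             (tentative '()))
    (let ((tentative (relax source confirmed tentative last-confirmed)))
      (if (null? tentative)
          (reverse confirmed)
          (let ((best (lowest-cost tentative)))
            (loop (cons best confirmed) best (remove-entry best tentative)))))))

Evaluating (route-table 'A) with these assumed costs yields the five confirmed entries (A 0 -), (C 3 C), (E 4 C), (B 6 E) and (D 6 E); B and D, tied at cost 6, may be confirmed in either order.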
Now let's look at an example. Consider a network of five nodes A, B, C, D and E (the original figure is omitted here) in which A is directly connected to B, C and D with link costs 9, 3 and 8 respectively, C is connected to E with cost 1, and E is connected to B and D with cost 2 each.
The steps for building the routing table for A are as follows:
|Step||Confirmed||Tentative||Comments|
|1||(A,0,-)|| ||Since A is the only new member of the Confirmed list, look at its LSP.|
|2||(A,0,-)||(B,9,B) (C,3,C) (D,8,D)||A's LSP says we can reach B through B at cost 9, which is better than anything else on either list; similarly for C and D.|
|3||(A,0,-) (C,3,C)||(B,9,B) (D,8,D)||Put the lowest-cost member of Tentative (C) onto the Confirmed list. Next, examine the LSP of the newly confirmed member (C).|
|4||(A,0,-) (C,3,C)||(B,9,B) (D,8,D) (E,4,C)||The cost to reach E through C is 4, so add (E,4,C) to the Tentative list.|
|5||(A,0,-) (C,3,C) (E,4,C)||(B,6,E) (D,6,E)||Put the lowest-cost member of Tentative (E) onto the Confirmed list and examine its LSP. The cost to reach B through E is 6, so replace (B,9,B); the cost to reach D through E is 6, so replace (D,8,D).|
|6||(A,0,-) (C,3,C) (E,4,C) (B,6,E)||(D,6,E)||Move the lowest-cost member of Tentative (B) to Confirmed. The only node remaining is D, so the next iteration adds (D,6,E).|
|7||(A,0,-) (C,3,C) (E,4,C) (B,6,E) (D,6,E)|| ||We are done. The shortest paths to all destinations are now known.|
Distance vector routing
The distance vector algorithm is iterative, asynchronous, and distributed. Each node receives some information from one or more of its directly attached neighbors, performs a calculation, and may then distribute the result of that calculation back to its neighbors; hence it is distributed. It is iterative because this process of exchanging information continues until no more information is exchanged between the neighbors.
Let dx(y) be the cost of the least-cost path from node x to node y. The least costs are related by the Bellman-Ford equation:
dx(y) = min over all neighbors v of x of { c(x,v) + dv(y) }
where the minimum is taken over all of x's neighbors v. If, after traveling from x to some neighbor v, we take the least-cost path from v to y, the cost of that path from x to y will be c(x,v) + dv(y). Since we must begin by traveling to some neighbor v, the least cost from x to y is the minimum of c(x,v) + dv(y) taken over all neighbors v.
In the distance vector algorithm, each node x maintains the following routing data:
- The cost of each link attached to a neighbor, i.e., for each attached neighbor v it knows c(x,v).
- Its own distance vector Dx = [Dx(y): y in N], which is x's estimate of its cost to each destination y in N; this is what goes into its routing table.
- The distance vector of each of its neighbors, i.e., Dv = [Dv(y): y in N] for each neighbor v of x.
In this distributed, asynchronous algorithm, each node sends a copy of its distance vector to each of its neighbors from time to time. When a node x receives a distance vector from a neighbor, it saves it and updates its own distance vector as:
Dx(y) = min over all neighbors v of { c(x,v) + Dv(y) }, for each destination y in N
When a node's distance vector changes as a result of this update, it sends the new vector to its neighbors. The neighbors perform the same actions, and this process continues until no node has any new information to send.
The distance vector algorithm [from Kurose] is, in outline, as follows. At each node x:
- Initialization: for each destination y in N, set Dx(y) = c(x,y) if y is a neighbor of x, and Dx(y) = infinity otherwise; then send the distance vector Dx to each neighbor.
- Loop: wait until either the cost of a link to some neighbor changes or a distance vector arrives from some neighbor; recompute Dx(y) = min over neighbors v of { c(x,v) + Dv(y) } for each destination y; if Dx(y) changed for any destination, send the updated Dx to every neighbor.
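The recomputation step can be sketched in Scheme as follows (the data representation -- association lists mapping destinations and neighbors to costs -- and all names are illustrative assumptions, not part of the algorithm as published):

;; LINK-COSTS maps each neighbor v of x to c(x,v); NEIGHBOR-VECTORS maps
;; each neighbor v to the most recently received distance vector Dv,
;; itself an association list from destinations to costs.
(define infinity 999999)   ; stand-in for an unreachable cost

(define (cost-via v y link-costs neighbor-vectors)
  ;; c(x,v) + Dv(y), treating a missing entry as infinity
  (let ((c  (assq v link-costs))
        (dv (assq y (cdr (assq v neighbor-vectors)))))
    (if (and c dv) (+ (cdr c) (cdr dv)) infinity)))

(define (recompute-dx destinations link-costs neighbor-vectors)
  ;; Dx(y) = min over neighbors v of { c(x,v) + Dv(y) }, for each y
  (map (lambda (y)
         (cons y (apply min
                        (map (lambda (entry)
                               (cost-via (car entry) y link-costs neighbor-vectors))
                             neighbor-vectors))))
       destinations))

For example, if x has neighbors v and w with c(x,v) = 1 and c(x,w) = 5, and the stored vectors report Dv(y) = 2 and Dw(y) = 1, then
(recompute-dx '(y) '((v . 1) (w . 5)) '((v (y . 2)) (w (y . 1))))
evaluates to ((y . 3)), i.e., x's new estimate of its cost to reach y is 3.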
Let's consider an example (the network topology figure and the intermediate tables are omitted here). Building the routing table for router R8 proceeds in rounds: after each exchange of distance vectors with its neighbors, R8 recomputes its estimates, and after a few rounds the table converges. The final routing table for R8 lists, for each destination, the next hop and the cost to reach it.
“In the network, bad news travels slowly.” Consider four routers R1, R2, R3 and R4 connected in a line, in that order.
Each router's entry for reaching R4 is written as (cost, next hop): initially R1 has (3, R2), R2 has (2, R3), and R3 has (1, R4). Suppose R4 fails. Since there is now no direct path between R3 and R4, R3 sets its distance to infinity. But in the next data exchange R3 sees that R2 advertises a path to R4 with cost 2, so it updates its entry from infinity to 2 + 1 = 3, i.e. (3, R2). In the second data exchange R2 learns that both R1 and R3 report a distance of 3 to R4, so it updates its entry for R4 to 3 + 1 = 4, i.e. (4, R3). In the third data exchange router R1 changes its entry to 4 + 1 = 5, i.e. (5, R2). This process continues, and the distances keep increasing. The summary is given in the following table.
|Exchange||R1||R2||R3|
|0||3, R2||2, R3||1, R4|
|1||3, R2||2, R3||3, R2|
|2||3, R2||4, R3||3, R2|
|3||5, R2||4, R3||5, R2|
|...||... count to infinity ...|
Solutions of count-to-infinity problem:
- Defining the maximum count
- For example, RFC 1058 (RIP) defines the maximum count as 16: a cost of 16 is treated as infinity, so if all else fails the counting to infinity stops at the 16th iteration.
- Split Horizon
- Use of Split Horizon. Split Horizon means that if node A has learned a route to node C through node B, then node A does not send the distance vector entry for C to node B during a routing update.
- Poisoned Reverse
- Poisoned Reverse is an additional technique to be used with Split Horizon. Split Horizon is modified so that instead of not sending the routing data, the node sends the data but sets the cost of the link to infinity. Split horizon with poisoned reverse prevents routing loops involving only two routers; for loops involving three or more routers, split horizon with poisoned reverse will not suffice.
http://en.m.wikibooks.org/wiki/Switches,_Routers,_Bridges_and_LANs/Routers/Basics_of_Routing
In grammar, a clause is the smallest grammatical unit that can express a complete proposition. A typical clause consists of a subject and a predicate, where the predicate is typically a verb phrase – a verb together with any objects and other modifiers. However, the subject is sometimes not expressed; this is often the case in null-subject languages, if the subject is retrievable from context, but it also occurs in certain cases in other languages such as English (as in imperative sentences and non-finite clauses).
A simple sentence usually consists of a single finite clause with a finite verb that is independent. More complex sentences may contain multiple clauses. Main clauses (= matrix clauses, independent clauses) are those that could stand as a sentence by themselves. Subordinate clauses (= embedded clauses, dependent clauses) are those that would be awkward or nonsensical if used alone.
Two major distinctions
A primary division for the discussion of clauses is the distinction between main clauses (= matrix clauses, independent clauses) and subordinate clauses (= embedded clauses, dependent clauses). A main clause can stand alone, i.e. it can constitute a complete sentence by itself. A subordinate clause (= embedded clause), in contrast, is reliant on the appearance of a main clause; it depends on the main clause and is therefore a dependent clause, whereas the main clause is an independent clause.
A second major distinction concerns the difference between finite and non-finite clauses. A finite clause contains a structurally central finite verb, whereas the structurally central word of a non-finite clause is often a non-finite verb. Traditional grammar focuses on finite clauses, the awareness of non-finite clauses having arisen much later in connection with the modern study of syntax. The discussion here also focuses on finite clauses, although some aspects of non-finite clauses are considered further below.
Clauses according to a distinctive syntactic trait
Clauses can be classified according to a distinctive trait that is a prominent characteristic of their syntactic form. The position of the finite verb is one major trait used for classification, and the appearance of a specific type of focusing word (e.g. wh-word) is another. These two criteria overlap to an extent, which means that often no single aspect of syntactic form is always decisive in determining how the clause functions. There are, however, strong tendencies.
Standard SV-clauses (subject-verb) are the norm in English. They are usually declarative (as opposed to exclamative, imperative, or interrogative); they express information in a neutral manner, e.g.
- The pig has not yet been fed. - Declarative clause, standard SV order
- I've been hungry for two hours. - Declarative clause, standard SV order
- ...that I've been hungry for two hours. - Declarative clause, standard SV order, but functioning as a subordinate clause due to the appearance of the subordinator that
Declarative clauses like these are by far the most frequently occurring type of clause in any language. They can be viewed as basic, other clause types being derived from them. Standard SV-clauses can also be interrogative or exclamative, however, given the appropriate intonation contour and/or the appearance of a question word, e.g.
- a. The pig has not yet been fed? - Rising intonation on fed makes the clause a yes/no-question.
- b. The pig has not yet been fed! - Spoken forcefully, this clause is exclamative.
- c. You've been hungry for how long? - Appearance of interrogative word how and rising intonation make the clause a constituent question
Examples like these demonstrate that how a clause functions cannot be known based entirely on a single distinctive syntactic criterion. SV-clauses are usually declarative, but intonation and/or the appearance of a question word can render them interrogative or exclamative.
Verb first clauses
Verb first clauses in English usually play one of three roles: 1. they express a yes/no-question via subject-auxiliary inversion, 2. they express a condition as an embedded clause, or 3. they express a command via imperative mood, e.g.
- a. He must stop laughing. - Standard declarative SV-clause (verb second order)
- b. Should he stop laughing? - Yes/no-question expressed by verb first order
- c. Had he stopped laughing,... - Condition expressed by verb first order
- d. Stop laughing! - Imperative formed with verb first order
- a. They have done the job. - Standard declarative SV-clause (verb second order)
- b. Have they done the job? - Yes/no-question expressed by verb first order
- c. Had they done the job,... - Condition expressed by verb first order
- d. Do the job! - Imperative formed with verb first order
Most verb first clauses are main clauses. Verb first conditional clauses, however, must be classified as embedded clauses because they cannot stand alone.
Wh-clauses contain a wh-word. Wh-words often serve to help express a constituent question. They are also prevalent, though, as relative pronouns, in which case they serve to introduce a relative clause and are not part of a question. The wh-word focuses a particular constituent and most of the time, it appears in clause-initial position. The following examples illustrate standard interrogative wh-clauses. The b-sentences are direct questions (main clauses), and the c-sentences contain the corresponding indirect questions (embedded clauses):
- a. Sam likes the meat. - Standard declarative SV-clause
- b. Who likes the meat? - Matrix interrogative wh-clause focusing on the subject
- c. They asked who likes the meat. - Embedded interrogative wh-clause focusing on the subject
- a. Larry sent Susan to the store. - Standard declarative SV-clause
- b. Who did Larry send to the store? - Matrix interrogative wh-clause focusing on the object, subject-auxiliary inversion present
- c. We know who Larry sent to the store. - Embedded wh-clause focusing on the object, subject-auxiliary inversion absent
- a. Larry sent Susan to the store. - Standard declarative SV-clause
- b. Where did Larry send Susan? - Matrix interrogative wh-clause focusing on the oblique object, subject-auxiliary inversion present
- c. Someone is wondering where Larry sent Susan. - Embedded wh-clause focusing on the oblique object, subject-auxiliary inversion absent
One important aspect of matrix wh-clauses is that subject-auxiliary inversion is obligatory when something other than the subject is focused. When it is the subject (or something embedded in the subject) that is focused, however, subject-auxiliary inversion does not occur.
- a. Who called you? - Subject focused, no subject-auxiliary inversion
- b. Who did you call? - Object focused, subject-auxiliary inversion occurs
Another important aspect of wh-clauses concerns the absence of subject-auxiliary inversion in embedded clauses, as illustrated in the c-examples just produced. Subject-auxiliary inversion is obligatory in matrix clauses when something other than the subject is focused, but it never occurs in embedded clauses regardless of the constituent that is focused. A systematic distinction in word order emerges across matrix wh-clauses, which can have VS order, and embedded wh-clauses, which always maintain SV order, e.g.
- a. Why are they doing that? - Subject-auxiliary inversion results in VS order in matrix wh-clause.
- b. They told us why they are doing that. - Subject-auxiliary inversion is absent in embedded wh-clause.
- c. *They told us why are they doing that. - Subject-auxiliary inversion is blocked in embedded wh-clause.
- a. Who is he trying to avoid? - Subject-auxiliary inversion results in VS order in matrix wh-clause.
- b. We know who he is trying to avoid. - Subject-auxiliary inversion is absent in embedded wh-clause.
- c. *We know who is he trying to avoid. - Subject-auxiliary inversion is blocked in embedded wh-clause.
Relative clauses are a mixed group. In English they can be standard SV-clauses if they are introduced by that or lack a relative pronoun entirely, or they can be wh-clauses if they are introduced by a wh-word that serves as a relative pronoun.
- a. Something happened twice. - Standard declarative SV-clause
- b. something that happened twice - Relative clause introduced by the relative pronoun that and modifying the indefinite pronoun something
- a. I know everyone. - Standard declarative SV-clause
- b. everyone I know - Relative clause lacking a relative pronoun entirely and modifying the indefinite pronoun everyone
- a. They left early - Standard declarative clause
- b. the time when they left early - Relative clause introduced by the relative proform when and modifying the noun time
- a. The woman sang a song. - Standard declarative SV-clause
- b. the woman who sang a song. - Relative clause introduced by the relative pronoun who and modifying the noun woman
Being embedded clauses, relative clauses in English cannot display subject-auxiliary inversion.
A particular type of wh-relative-clause is the so-called free relative clause. Free relatives typically function as arguments, e.g.
- a. What he did was unexpected. - Free relative clause functioning as subject argument
- b. He will flatter whoever is present. - Free relative clause functioning as object argument
These relative clauses are "free" because they can appear in a variety of syntactic positions; they are not limited to appearing as modifiers of nominals. The suffix -ever is often employed to render a standard relative pronoun as a pronoun that can introduce a free relative clause.
Clauses according to semantic predicate-argument function
Embedded clauses can be categorized according to their syntactic function in terms of predicate-argument structures. They can function as arguments, as adjuncts, or as predicative expressions. That is, embedded clauses can be an argument of a predicate, an adjunct on a predicate, or (part of) the predicate itself. The predicate in question is usually the matrix predicate of a main clause, but embedding of predicates is also frequent.
A clause that functions as the argument of a given predicate is known as an argument clause. Argument clauses can appear as subjects, as objects, and as obliques. They can also modify a noun predicate, in which case they are known as content clauses.
- That they actually helped was really appreciated. - SV-clause functioning as the subject argument
- They mentioned that they had actually helped. - SV-clause functioning as the object argument
- What he said was ridiculous. - Wh-clause functioning as the subject argument
- We know what he said. - Wh-clause functioning as an object argument
- He talked about what he had said. - Wh-clause functioning as an oblique object argument
The following examples illustrate argument clauses that provide the content of a noun. Such argument clauses are content clauses:
- a. the claim that he was going to change it - Argument clause that provides the content of a noun (= content clause)
- b. the claim that he expressed - Adjunct clause (relative clause) that modifies a noun
- a. the idea that we should alter the law - Argument clause that provides the content of a noun (= content clause)
- b. the idea that came up - Adjunct clause (relative clause) that modifies a noun
The content clauses like these in the a-sentences are arguments. Relative clauses introduced by the relative pronoun that as in the b-clauses here have an outward appearance that is closely similar to that of content clauses. The relative clauses are adjuncts, however, not arguments.
Adjunct clauses are embedded clauses that modify an entire predicate-argument structure. All clause types (SV-, verb first, wh-) can function as adjuncts, although the stereotypical adjunct clause is SV and introduced by a subordinator (= subordinate conjunction, e.g. after, because, before, when, etc.), e.g.
- a. Fred arrived before you did. - Adjunct clause modifying matrix clause
- b. After Fred arrived, the party started. - Adjunct clause modifying matrix clause
- c. Susan skipped the meal because she is fasting. - Adjunct clause modifying matrix clause
These adjunct clauses modify the entire matrix clause. Thus before you did in the first example modifies the matrix clause Fred arrived. Adjunct clauses can also modify a nominal predicate. The typical instance of this type of adjunct is a relative clause, e.g.
- a. We like the music that you brought. - Relative clause functioning as an adjunct that modifies the noun music
- b. The people who brought music were singing loudly. - Relative clause functioning as an adjunct that modifies the noun people
- c. They are waiting for some food that will not come. - Relative clause functioning as an adjunct that modifies the noun food
An embedded clause can also function as a predicative expression. That is, it can form (part of) the predicate of a greater clause.
- a. That was when they laughed. - Predicative SV-clause, i.e. a clause that functions as (part of) the main predicate
- b. He became what he always wanted to be. - Predicative wh-clause, i.e. wh-clause that functions as (part of) the main predicate
These predicative clauses are functioning just like other predicative expressions, e.g. predicative adjectives (That was good) and predicative nominals (That was the truth). They form the matrix predicate together with the copula.
Some of the distinctions presented above are represented in syntax trees. These trees make the difference between main and subordinate clauses very clear, and they also illustrate well the difference between argument and adjunct clauses. The following dependency grammar trees show that embedded clauses are dependent on an element in the main clause, often on a verb:
The main clause encompasses the entire tree each time, whereas the embedded clause is contained within the main clause. These two embedded clauses are arguments. The embedded wh-clause what he wanted is the object argument of the predicate know. The embedded clause that he is gaining is the subject argument of the predicate is motivating. Both of these argument clauses are directly dependent on the main verb of the matrix clause. The following trees identify adjunct clauses using an arrow dependency edge:
These two embedded clauses are adjunct clauses because they provide circumstantial information that modifies a superordinate expression. The first is a dependent of the main verb of the matrix clause and the second is a dependent of the object noun. The arrow dependency edges identify them as adjuncts. The arrow points away from the adjunct towards its governor to indicate that semantic selection is running counter to the direction of the syntactic dependency; the adjunct is selecting its governor. The next four trees illustrate the distinction mentioned above between matrix wh-clauses and embedded wh-clauses.
The embedded wh-clause is an object argument each time. The position of the wh-word across the matrix clauses (a-trees) and the embedded clauses (b-trees) captures the difference in word order. Matrix wh-clauses have V2 word order, whereas embedded wh-clauses have (what amounts to) V3 word order. In the matrix clauses, the wh-word is a dependent of the finite verb, whereas it is the head over the finite verb in the embedded wh-clauses.
Clauses vs. phrases
There has been confusion about the distinction between clauses and phrases. This confusion is due in part to how these concepts are employed in the phrase structure grammars of the chomskyan tradition. In the 1970s, chomskyan grammars began labeling many clauses as CPs (= complementizer phrases) or as IPs (= inflection phrases), and then later as TPs (= tense phrases), etc. The choice of labels was influenced by the theory-internal desire to use the labels consistently. The X-bar schema acknowledged at least three projection levels for every lexical head: a minimal projection (e.g. N, V, P, etc.), an intermediate projection (e.g. N', V', P', etc.), and a phrase level projection (e.g. NP, VP, PP, etc.). Extending this convention to the clausal categories occurred in the interest of the consistent use of labels.
This use of labels should not, however, be confused with the actual status of the syntactic units to which the labels are attached. A more traditional understanding of clauses and phrases maintains that phrases are not clauses, and clauses are not phrases. There is a progression in the size and status of syntactic units: words < phrases < clauses. The characteristic trait of clauses, i.e. the presence of a subject and a (finite) verb, is absent from phrases. Clauses can be, however, embedded inside phrases.
The central word of a non-finite clause is usually a non-finite verb (as opposed to a finite verb). There are various types of non-finite clauses that can be acknowledged based in part on the type of non-finite verb at hand. Gerunds are widely acknowledged to constitute non-finite clauses, and some modern grammars also judge many to-infinitives to be the structural locus of non-finite clauses. Finally, some modern grammars also acknowledge so-called small clauses, which often lack a verb altogether. It should be apparent that non-finite clauses are (by and large) embedded clauses.
The underlined words in the following examples are judged to be non-finite clauses, e.g.
- a. Bill stopping the project was a big disappointment. - Non-finite gerund clause
- b. Bill's stopping the project was a big disappointment. - Gerund with noun status
- a. We've heard about Susan attempting a solution. - Non-finite gerund clause
- b. We've heard about Susan's attempting a solution. - Gerund with noun status
- a. They mentioned him cheating on the test. - Non-finite gerund clause
- b. They mentioned his cheating on the test. - Gerund with noun status
Each of the gerunds in the a-sentences (stopping, attempting, and cheating) constitutes a non-finite clause. The subject-predicate relationship that has long been taken as the defining trait of clauses is fully present in the a-sentences. The fact that the b-sentences are also acceptable illustrates the enigmatic behavior of gerunds. They seem to straddle two syntactic categories: they can function as non-finite verbs or as nouns. When they function as nouns as in the b-sentences, it is debatable whether they constitute clauses, since nouns are not generally taken to be constitutive of clauses.
Some modern theories of syntax take many to-infinitives to be constitutive of non-finite clauses. This stance is supported by the clear predicate status of many to-infinitives. It is challenged, however, by the fact that to-infinitives do not take an overt subject, e.g.
- a. She refuses to consider the issue.
- a. He attempted to explain his concerns.
The to-infinitives to consider and to explain clearly qualify as predicates (because they can be negated). They do not, however, take overt subjects. The subjects she and he are dependents of the matrix verbs refuses and attempted, respectively, not of the to-infinitives. Data like these are often addressed in terms of control. The matrix predicates refuses and attempted are control verbs; they control the embedded predicates consider and explain, which means they determine which of their arguments serves as the subject argument of the embedded predicate. Some theories of syntax posit the null subject PRO (=pronoun) to help address the facts of control constructions, e.g.
- b. She refuses PRO to consider the issue.
- b. He attempted PRO to explain his concerns.
With the presence of PRO as a null subject, to-infinitives can be construed as complete clauses, since both subject and predicate are present.
One must keep in mind, though, that PRO-theory is particular to one tradition in the study of syntax and grammar (Government and Binding Theory, Minimalist Program). Other theories of syntax and grammar (e.g. Head-Driven Phrase Structure Grammar, Construction Grammar, dependency grammar) reject the presence of null elements such as PRO, which means they are likely to reject the stance that to-infinitives constitute clauses.
Another type of construction that some schools of syntax and grammar view as non-finite clauses is the so-called small clause. A typical small clause consists of a noun phrase and a predicative expression, e.g.
- We consider that a joke. - Small clause with the predicative noun phrase a joke
- Something made him angry. - Small clause with the predicative adjective angry
- She wants us to stay. - Small clause with the predicative non-finite to-infinitive to stay
The subject-predicate relationship is clearly present in the underlined strings. The expression on the right is a predication over the noun phrase immediately to its left. While the subject-predicate relationship is indisputably present, the underlined strings do not behave as single constituents, a fact that undermines their status as clauses. Hence one can debate whether the underlined strings in these examples should qualify as clauses. The layered structures of the chomskyan tradition are again likely to view the underlined strings as clauses, whereas the schools of syntax that posit flatter structures are likely to reject clause status for them.
- Adverbial clause
- Dependent clause
- Relative clause
- Sentence (linguistics)
- Thematic equative
- Balancing and deranking
- For this basic definition in terms of a proposition, see Kroeger (2005:32).
- For a definition of the clause that emphasizes the subject-predicate relationship, see Radford (2004:327f.).
- Most basic discussions of the clause emphasize the distinction between main and subordinate clauses. See for instance Crystal (1997:62).
- Numerous dependency grammar trees like the ones produced here can be found, for instance, in Osborne and Groß (2012).
- For an example of a grammar that acknowledges non-finite to-infinitive clauses, see Radford (2004:23).
- For the basic characteristics of small clauses, see Crystal (1997:62).
- Crystal, D. 1997. A dictionary of linguistics and phonetics, fourth edition. Oxford, UK: Blackwell Publishers.
- Kroeger, P. 2005. Analysing Grammar: An Introduction. Cambridge, UK: Cambridge University Press.
- Osborne, T. and T. Groß 2012. Constructions are catenae: Construction Grammar meets Dependency Grammar. Cognitive Linguistics 23, 1, 163-214.
- Radford, A. 2004. English syntax: An introduction. Cambridge, UK: Cambridge University Press.
http://en.wikipedia.org/wiki/Clauses
Inductive reasoning, also known as induction or informally "bottom-up" logic, is a kind of reasoning that constructs or evaluates general propositions that are derived from specific examples. Inductive reasoning contrasts with deductive reasoning, in which specific examples are derived from general propositions.
The philosophical definition of inductive reasoning is much more nuanced than simple progression from particular/individual instances to broader generalizations. Rather, the premises of an inductive logical argument indicate some degree of support (inductive probability) for the conclusion but do not entail it; that is, they suggest truth but do not ensure it. In this manner, there is the possibility of moving from generalizations to individual instances. Inductive reasoning consists of inferring general principles or rules from specific facts. A well-known laboratory example of inductive reasoning works like a guessing game. The participants are shown cards that contain figures differing in several ways, such as shape, number, and color. On each trial, they are given two cards and asked to choose the one that represents a particular concept. After they choose a card, the researcher says "right" or "wrong."
Though many dictionaries define inductive reasoning as reasoning that derives general principles from specific observations, this usage is outdated.
Inductive reasoning is probabilistic; it only states that, given the premises, the conclusion is probable.
A statistical syllogism is an example of inductive reasoning:
- Almost all people are taller than 26 inches
- Gareth is a person
- Therefore, Gareth is almost certainly taller than 26 inches
As a stronger example:
- 100% of biological life forms that we know of depend on liquid water to exist.
- Therefore, if we discover a new biological life form it will probably depend on liquid water to exist.
This argument could have been made every time a new biological life form was found, and would have been correct every time; this does not mean it is impossible that in the future a biological life form that does not require water could be discovered.
As a result, the argument may be stated less formally as:
- All biological life forms that we know of depend on liquid water to exist.
- Therefore, all biological life probably depends on liquid water to exist.
Inductive vs. deductive reasoning
Unlike deductive arguments, inductive reasoning allows for the possibility that the conclusion is false, even if all of the premises are true. Instead of being valid or invalid, inductive arguments are either strong or weak, which describes how probable it is that the conclusion is true.
A classical example of an incorrect inductive argument was presented by John Vickers:
- All of the swans we have seen are white.
- Therefore, all swans are white.
The classic philosophical treatment of the problem of induction was given by the Scottish philosopher David Hume. Hume highlighted the fact that our everyday habits of mind depend on drawing uncertain conclusions from our relatively limited experiences rather than on deductively valid arguments. For example, we believe that bread will nourish us because it has done so in the past, despite no guarantee that it will do so. Hume argued that it is impossible to justify inductive reasoning: specifically, that it cannot be justified deductively, so our only option is to justify it inductively. Since this is circular he concluded that it is impossible to justify induction.
However, Hume then stated that even if induction were proved unreliable, we would still have to rely on it. So instead of a position of severe skepticism, Hume advocated a practical skepticism based on common sense, where the inevitability of induction is accepted.
Inductive reasoning is also known as hypothesis construction because any conclusions made are based on current knowledge and predictions. As with deductive arguments, biases can distort the proper application of inductive argument, thereby preventing the reasoner from forming the most logical conclusion based on the clues. Examples of these biases include the availability heuristic, confirmation bias, and the predictable-world bias.
The availability heuristic causes the reasoner to depend primarily upon information that is readily available to him/her. People have a tendency to rely on information that is easily accessible in the world around them. For example, in surveys, when people are asked to estimate the percentage of people who died from various causes, most respondents choose causes that are most prevalent in the media, such as terrorism, murder and airplane accidents, rather than causes such as disease and traffic accidents, which are technically "less accessible" to the individual since they are not emphasized as heavily in the world around him/her.
The confirmation bias is based on the natural tendency to confirm rather than to deny a current hypothesis. Research has demonstrated that people are inclined to seek solutions to problems that are more consistent with known hypotheses rather than attempt to refute those hypotheses. Often, in experiments, subjects will ask questions that seek answers that fit established hypotheses, thus confirming these hypotheses. For example, if it is hypothesized that Sally is a sociable individual, subjects will naturally seek to confirm the premise by asking questions that would produce answers confirming that Sally is in fact a sociable individual.
The predictable-world bias revolves around the inclination to perceive order where it has not been proved to exist. A major aspect of this bias is superstition, which is derived from the inability to acknowledge that coincidences are merely coincidences. Gambling, for example, is one of the most obvious forms of predictable-world bias. Gamblers often begin to think that they see patterns in the outcomes and, therefore, believe that they are able to predict outcomes based upon what they have witnessed. In reality, however, the outcomes of these games are difficult, if not impossible to predict. The perception of order arises from wishful thinking. Since people constantly seek some type of order to explain or justify their beliefs and experiences, it is difficult for them to acknowledge that the perceived or assumed order may be entirely different from what they believe they are experiencing.
Generalization
A generalization (more accurately, an inductive generalization) proceeds from a premise about a sample to a conclusion about the population:
- The proportion Q of the sample has attribute A.
- Therefore, the proportion Q of the population has attribute A.
There are 20 balls—either black or white—in an urn. To estimate their respective numbers, you draw a sample of four balls and find that three are black and one is white. A good inductive generalization would be that there are 15 black, and five white, balls in the urn.
How much the premises support the conclusion depends upon (a) the number in the sample group, (b) the number in the population, and (c) the degree to which the sample represents the population (which may be achieved by taking a random sample). The hasty generalization and the biased sample are generalization fallacies.
Statistical syllogism
A statistical syllogism proceeds from a generalization to a conclusion about an individual.
- A proportion Q of population P has attribute A.
- An individual X is a member of P.
- There is a probability which corresponds to Q that X has A.
Simple induction
Simple induction proceeds from a premise about a sample group to a conclusion about another individual.
- Proportion Q of the known instances of population P has attribute A.
- Individual I is another member of P.
- There is a probability corresponding to Q that I has A.
This is a combination of a generalization and a statistical syllogism, where the conclusion of the generalization is also the first premise of the statistical syllogism.
Argument from analogy
The process of analogical inference involves noting the shared properties of two or more things, and from this basis inferring that they also share some further property:
- P and Q are similar in respect to properties a, b, and c.
- Object P has been observed to have further property x.
- Therefore, Q probably has property x also.
Analogical reasoning is very frequent in common sense, science, philosophy and the humanities, but sometimes it is accepted only as an auxiliary method. A refined approach is case-based reasoning. For more information on inferences by analogy, see Juthe, 2005.
Causal inference
A causal inference draws a conclusion about a causal connection based on the conditions of the occurrence of an effect. Premises about the correlation of two things can indicate a causal relationship between them, but additional factors must be confirmed to establish the exact form of the causal relationship.
Prediction
A prediction draws a conclusion about a future individual from a past sample.
- Proportion Q of observed members of group G have had attribute A.
- There is a probability corresponding to Q that other members of group G will have attribute A when next observed.
Bayesian inference
As a logic of induction rather than a theory of belief, Bayesian inference does not determine which beliefs are a priori rational, but rather determines how we should rationally change the beliefs we have when presented with evidence. We begin by committing to a prior probability for a hypothesis based on logic or previous experience, and when faced with evidence, we adjust the strength of our belief in that hypothesis in a precise manner using Bayesian logic.
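As an illustration of this updating rule, the short Python sketch below (all numbers are hypothetical) revises a prior belief in a hypothesis after one piece of evidence, using Bayes' theorem:

```python
# A minimal sketch of Bayesian updating; all numbers are hypothetical.
def update_belief(prior, p_evidence_if_true, p_evidence_if_false):
    """Return the posterior probability of a hypothesis after one observation."""
    numerator = p_evidence_if_true * prior
    evidence = numerator + p_evidence_if_false * (1.0 - prior)
    return numerator / evidence

# Prior belief of 0.5; the observed evidence is three times more likely
# if the hypothesis is true than if it is false.
posterior = update_belief(prior=0.5, p_evidence_if_true=0.6, p_evidence_if_false=0.2)
print(round(posterior, 3))  # 0.75 -- the belief is strengthened by the evidence
```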
Inductive inference
Around 1960, Ray Solomonoff founded the theory of universal inductive inference, the theory of prediction based on observations; for example, predicting the next symbol based upon a given series of symbols. This is a formal inductive framework that combines algorithmic information theory with the Bayesian framework. Universal inductive inference is based on solid philosophical foundations and can be considered as a mathematically formalized Occam's razor. Fundamental ingredients of the theory are the concepts of algorithmic probability and Kolmogorov complexity.
See also
- Abductive reasoning
- Deductive reasoning
- Inductive Logic Programming
- Inductive reasoning aptitude
- Inferential statistics
- Lateral thinking
- Laurence Jonathan Cohen
- Logical positivism
- Open world assumption
- Machine learning
- Mathematical induction
- Mill's Methods
- Raven paradox
- Deduction & Induction, Research Methods Knowledge Base
- Carlson, N.R. & Heth, C.D. (2009). Psychology: the Science of Behavior. Toronto: Pearson Education Canada.
- "Deductive and Inductive Arguments", Internet Encyclopedia of Philosophy, "Some dictionaries define "deduction" as reasoning from the general to specific and "induction" as reasoning from the specific to the general. While this usage is still sometimes found even in philosophical and mathematical contexts, for the most part, it is outdated."
- John Vickers. The Problem of Induction. The Stanford Encyclopedia of Philosophy.
- Herms, D. "Logical Basis of Hypothesis Testing in Scientific Research" (pdf).
- Sextus Empiricus, Outlines Of Pyrrhonism. Trans. R.G. Bury, Harvard University Press, Cambridge, Massachusetts, 1933, p. 283.
- Karl R. Popper, David W. Miller. "A proof of the impossibility of inductive probability." Nature 302 (1983), 687–688.
- Vickers, John. "The Problem of Induction" (Section 2). Stanford Encyclopedia of Philosophy. 21 June 2010
- Vickers, John. "The Problem of Induction" (Section 2.1). Stanford Encyclopedia of Philosophy. 21 June 2010.
- Gray, Peter. Psychology. New York: Worth, 2011. Print.
- Baronett, Stan (2008). Logic. Upper Saddle River, NJ: Pearson Prentice Hall. pp. 321–325.
- Samuel Rathmanner and Marcus Hutter. A philosophical treatise of universal induction. Entropy, 13(6):1076–1136, 2011
- Kemerling, G (27 October 2001). "Causal Reasoning".
- Holland, JH; Holyoak KJ; Nisbett RE; Thagard PR (1989). Induction: Processes of Inference, Learning, and Discovery. Cambridge, MA, USA: MIT Press. ISBN 0-262-58096-9.
- Holyoak, K; Morrison R (2005). The Cambridge Handbook of Thinking and Reasoning. New York: Cambridge University Press. ISBN 978-0-521-82417-0.
- Inductive Logic entry in the Stanford Encyclopedia of Philosophy
- Inductive reasoning at PhilPapers
- Inductive reasoning at the Indiana Philosophy Ontology Project
- Inductive reasoning entry in the Internet Encyclopedia of Philosophy
- Four Varieties of Inductive Argument from the Department of Philosophy, University of North Carolina at Greensboro.
- PDF (166 KiB), a psychological review by Evan Heit of the University of California, Merced.
- The Mind, Limber An article which employs the film The Big Lebowski to explain the value of inductive reasoning. | http://en.wikipedia.org/wiki/Induction_(philosophy) | 13 |
20 | This article discusses the basic characteristics of nonparametric statistical tests, contrasting them with the characteristics of parametric statistical tests. Examples for performing nonparametric statistical tests on practitioners' own data also are included.
Nonparametric tests can be used with data that are of the nominal (e.g., characteristics such as right and left, male and female) and ordinal (e.g., mild, moderate and severe) levels of measurement, which may not follow the normal distribution curve or comply with other assumptions required of data analyzed by parametric statistical methods. However, the results of analyzing data with these nonparametric statistical methods can yield important information about the degree to which qualities of one group of data differ from those of another group.
Statistical tests have been developed to permit comparisons regarding the degree to which qualities of one group of data differ from those of another group. Each statistical test is based on certain assumptions about the population(s) from which the data are drawn. If a particular statistical test is used to analyze data collected from a sample that does not meet the expected assumptions, then the conclusions drawn from the results of the test will be flawed.
The two classes of statistical tests are called parametric and nonparametric. The word parametric comes from "metric," meaning to measure, and "para," meaning beside or closely related; the combined term refers to the assumptions about the population from which the measurements were obtained. Nonparametric data do not meet such rigid assumptions. Nonparametric tests sometimes are referred to as "distribution-free." That is, the data can be drawn from a sample that may not follow the normal distribution (1).
Before a parametric test can be undertaken, it must be ascertained that: 1) the samples are random (i.e., each member of the population has an equal chance of being selected for measurement); 2) the scores are independent of each other; 3) the experiments are repeatable with constancy of measurements from experiment to experiment; 4) the data are normally distributed; and 5) the samples have similar variances (1,2). Parametric statistics use mean values, standard deviation and variance to estimate differences between measurements that characterize particular populations.
The two major types of parametric tests are Student's t-tests ("Student" was the pen name of the statistician who developed the test) and analyses of variance (ANOVA). Nonparametric tests use rank or frequency information to draw conclusions about differences between populations. Parametric tests usually are assumed to be more powerful than nonparametric tests. However, parametric tests cannot always be used to analyze the significance of differences because the assumptions on which they are based are not always met.
A statistical test can never establish the truth of an hypothesis with 100-percent certainty. Typically, the hypothesis is specified in the form of a "null hypothesis," i.e., the score characterizing one group of measurements does not differ (within an acceptable margin of measurement error) from the score characterizing another group. Note the hypothesis does not state the two scores are the same; rather, it states no significant difference can be detected. Performing the statistical procedure yields a test result that helps one reach a decision that 1) the scores are not different (the hypothesis is confirmed) or 2) the difference in the scores is too great to be explained by chance (the hypothesis is rejected).
Rejecting the hypothesis when it actually is true is called a Type-I error. Failure to reject the hypothesis when it is false is termed a Type-II error. For convenience and simplicity, a 5-percent risk of making a Type-I error has become conventional; one should be correct 95 out of 100 times when using the listed value in the probability tables to accept or reject the hypothesis.
In statistics, robustness is the degree to which a test can stray from its assumptions before the confidence you have in the result of the statistical test changes. Choosing a nonparametric test trades the power, or large sample size (3), of the parametric test for robustness (4). Further, a method requiring few and weak assumptions about the population(s) being sampled is less dependable than the corresponding parametric method and increases the chances of committing a Type-II error (5,6).
Independence or dependence of samples concerns whether the different sets of numbers being compared are independent or dependent of each other (7). Sets are independent when values in one set tell nothing about values in another set. When two or more groups consist of different, unrelated individuals, the observations made about the samples are independent. When the sets of numbers consist of repeated measures on the same individuals, they are said to be dependent. Similarly, if male and female characteristics are compared using brother-sister pairs, the samples are dependent. Matching two or more groups of individuals on factors such as income, education, age, height and weight also yields dependent samples.
This type of selection sometimes is difficult to confirm. However, so long as the data sets used in the analysis are relatively normally distributed, the robustness of most parametric tests still provides an appropriate level of rejection of the null hypothesis.
The variances of the data from each group being compared must be equal (homogeneous), and this homogeneity of variance can be tested statistically (8). If the variances are found to differ significantly, then nonparametric tests must be used.
Parametric tests require data from which means and variances can be calculated, i.e., interval and ratio data. Some statisticians also support the use of parametric tests with ordinal-scaled values because the distribution of ordinal data often is approximately normal. As long as the actual data meet the parametric assumptions, regardless of the origin of the numbers, then parametric tests can be conducted. As is the case with all statistical tests of differences, the researcher must interpret parametric statistical conclusions based on ordinal data in light of their clinical or practical implications.
Nonparametric tests are used in the behavioral sciences when there is no basis for assuming certain types of distributions. Siegel has advocated that nonparametric tests be used for nominal and ordinal levels of measurements while parametric tests be used for analyzing interval and ratio data (4).
On the other hand, Williamsen has argued that statistical tests are selected to meet certain goals or to answer specific questions rather than to match certain levels of measurement with parametric or nonparametric procedures (9). This view currently is prevalent among many statisticians.
In practice, levels of measurement sometimes are "downgraded" from ratio and interval scales to ordinal or nominal scales for the convenience of a measuring instrument or interpretation. For example, muscular strength (measured with a force gauge and considering the length of the lever arm through which the force is acting) is a variable that yields ratio data because a true zero point exists in the level of measurement. Muscular strength is absent with paralysis (true zero point). The manual muscle test converts the ratio characteristic of force into an ordinal scale by assigning grades of relative position (normal, good, fair, poor, trace, zero; or 5, 4, 3, 2, 1, 0).
Nonparametric tests should not be substituted for parametric tests when parametric tests are more appropriate. Nonparametric tests should be used when the assumptions of parametric tests cannot be met, when very small numbers of data are used, and when no basis exists for assuming certain types or shapes of distributions (9).
Nonparametric tests are used if data can only be classified, counted or ordered-for example, rating staff on performance or comparing results from manual muscle tests. These tests should not be used in determining precision or accuracy of instruments because the tests are lacking in both areas.
Nonparametric tests usually can be performed quickly and easily without automated instruments (calculators and computers). They are designed for small numbers of data, including counts, classifications and ratings. They are easier to understand and explain.
Calculations of nonparametric tests generally are easy to perform and apply, and they have certain intuitive appeal as shortcut techniques. Nonparametric tests are relatively robust and can be used effectively for determining relationships and significance of differences using behavioral research methods.
Parametric tests are more powerful than nonparametric tests and deal with continuous variables whereas nonparametric tests often deal with discrete variables (10). Using results from analyses of nonparametric tests for making inferences should be done with caution because small numbers of data are used, and no assumptions about parent populations are made. The ease of calculation and reduced concern for assumptions have been referred to as "quick and dirty statistical procedures" (11).
Descriptive statistics involve tabulating, depicting and describing collections of data. These data may be either quantitative, such as measures of leg length (variables that are characterized by an underlying continuum) or representative of qualitative variables, such as gender, vocational status or personality type.
Collections of data generally must be summarized in some fashion to be more easily understood. Descriptive statistics serve as the means to describe, summarize and reduce to manageable form the properties of an otherwise unwieldy mass of data. Descriptive statistics used to characterize data analyzed by parametric tests include the mean, standard deviation and variance.
Those descriptive statistics used to characterize data analyzed by nonparametric tests include the mode, median and percentile rank:

PR = 100(n − Ri + ½) / n

where Ri is the rank of the observation Xi (ranked from highest to lowest), and n is the number of observations in the distribution. The median is the 50th percentile.
In statistics, the mean or median commonly is used when dealing with measurement data. The mode most often is useful when dealing with data more appropriately handled with classification procedures (e.g., mild, moderate, severe).
Correlation coefficients are used to reveal the nature and extent of association between two variables. Each method used to determine a correlation coefficient has conditions that must be met for its use to be appropriate. The first step in analyzing a relationship always is selection of the proper measure of association based on the conditions of the study and the hypothesis to be tested.
Measures of association are useful for a variety of studies. Correlation coefficients are used in exploratory studies to determine relationships among variables in new study areas. The results of such studies allow investigators to formulate further research questions or hypotheses to delve more deeply into the study area. In some studies, the hypotheses focus on associations between selected variables, and the correlation coefficients serve to test these hypotheses.
Similarly, hypotheses based on expected associations among variables make important contributions to theory building.
Finally, correlation coefficients are used to manage threats to validity in experimental and quasi-experimental studies. They can be used to test the credibility of findings when groups have been compared by checking on the association of independent and extraneous variables with the dependent variable.
Spearman's rank order correlation coefficient rho is a nonparametric method of computing correlation from ranks. The method is similar to that used to compute Pearson's correlation coefficient (a parametric test), with the computed value rho providing an index of relation between two groups of ranks.
If the original scores are ranks, the computed index will be similar in value to that computed by the Pearson (product moment) method. The difference between the two methods is the product moment method assigns weight to the magnitude of each score whereas the rank method focuses on the ordinal position of each score (9). The coefficient of rank correlation (rho) ranges from +1, when paired ranks are changing in the same order, to -1, when ranks are changing in reverse order. A score of zero indicates the paired ranks are occurring at random. The equation for rank correlation is:

ρ = 1 − (6Σd²) / [n(n² − 1)]     (2)

where d is the difference between each subject's rank for the two variables being studied, Σd² is the sum of squared differences between ranks, 6 is a constant and n is the number of paired scores.
Suppose 10 students are drawn at random from a large class; each student has been rated on a 10-point scale for a recent clinical experience, and each student has a grade-point average (GPA) on file. The coefficient of ranks can be computed to determine the extent of agreement between the two sets of scores (clinical experience ratings and GPA). In the rank correlation method, the raw scores are replaced by assigning an appropriate rank to each score of each set. Ranks for each set correspond to the total number of scores in that set.
Step 1. Make a table of the subjects' scores and ranks for the two variables of interest and subtract the ranks to determine the difference (diff) for each pair of ranks. Square each of these differences and sum the squared values (see Table A ).
This example illustrates what happens when scores are similar (tied ranks). When tied ranks occur (e.g., column Y of Table A), each score is assigned the average rank the tied scores occupy (a higher rank is better). The GPA of 3.2, for example, had two scores occupying ranks 5 and 6. The average rank for the score 3.2 is obtained by adding the ranks (5 + 6) and dividing by the number of ranks occupied (e.g., (5 + 6) ÷ 2 = 5.5).
Step 2. Substitute the calculated value of Σd² in Equation (2) and solve for ρ:

ρ = 1 − (6 × 22) / [10(10² − 1)] = 1 − 132/990 = 0.867
Consulting a textbook of statistics that provides a table of critical values for ρ, one finds a minimum ρ of 0.746 is needed to be considered significant at the .05 level of significance. Thus, the computed coefficient ρ = 0.867 confirms a statistically significant correlation between the two sets of rankings (a conclusion that will be incorrect less than five times out of 100).
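In practice the same coefficient can be obtained from a statistics library; the Python sketch below (assuming scipy is installed, and using illustrative ratings and GPAs rather than the article's Table A values) averages tied ranks automatically:

```python
from scipy.stats import spearmanr

# Illustrative data for 10 students (not the article's Table A values):
# a 10-point clinical-experience rating and a grade-point average.
clinical_rating = [9, 7, 8, 5, 6, 4, 7, 3, 6, 2]
gpa             = [3.8, 3.4, 3.6, 3.0, 3.2, 2.8, 3.5, 2.6, 3.2, 2.4]

rho, p_value = spearmanr(clinical_rating, gpa)  # tied scores receive averaged ranks
print(f"rho = {rho:.3f}, p = {p_value:.4f}")
```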
Kendall's rank correlation tau (τ) is another nonparametric measure of association. When relatively large numbers of ties exist in a set of rankings, Kendall's tau is preferred over Spearman's rho. The formula and procedures for calculating it have been adapted from Siegel (4):

τ = S / [½N(N − 1)]     (3)

where S is the sum of the +1 and −1 scores assigned to pairs of ranks (computed below) and N = the number of objects or individuals ranked on both X and Y characteristics.
The value of S can be determined by arranging the first set of measurements (see Table B and Table C ) into their natural order (e.g., 1, 2, 3, 4, 5) and aligning the second set of measurements under them (e.g., 2, 1, 4, 5, 3). Starting with the first number on the left in the bottom row, the number of ranks on the right which are larger are counted. The derivations of the actual score and the maximum possible score are illustrated in the example that follows: Two orthotists rank the fit of an "off-the-shelf" ankle-foot orthosis (AFO) on five different patients. The two sets of rankings appear in Table B .
Step 1. Rearrange the data so the first orthotist's rankings fall in a natural (increasing) order and the second orthotist's rankings are tabulated in the same order (see Table C ).
Step 2. Compare the first ranking of orthotist 2 with every ranking to its right, assigning a +1 to every pair in which the order is natural and a -1 to every pair in which the order is unnatural. For the first ranking (2, compared with 1, 4, 5 and 3): (−1) + (+1) + (+1) + (+1) = +2.
Repeat for each subsequent ranking of orthotist 2: for 1 (compared with 4, 5 and 3), (+1) + (+1) + (+1) = +3; for 4 (compared with 5 and 3), (+1) + (−1) = 0; for 5 (compared with 3), −1.
Step 3. Add these measures of "disarray" (sum = 4) and enter this sum in the above formula as a substitute for S.
Step 4. The value of N = 5. Thus, Equation (3) becomes:

τ = 4 / [½(5)(5 − 1)] = 4/10 = 0.40
Step 5. The statistical significance can be determined by two procedures, depending on sample size.
If N is equal to or less than 10, use a probability table such as that found in the appendix of a textbook on statistics to find the statistical significance of 'r. In this example, the table of probability indicates a probability score (p-value) of 0.242 for a value of 0.400. Thus, this test supports the conclusion that the ratings of the two orthotists are not significantly correlated.
For situations in which N is greater than 10, τ is approximately normally distributed, and a z score can be computed for the τ obtained and the statistical significance of the correlation read from a corresponding table of z scores:

z = τ / √[2(2N + 5) / (9N(N − 1))]
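The hand calculation above can be checked with a library routine; the following Python sketch (assuming scipy) uses the two orthotists' rankings from Table C. Note that scipy reports a two-sided p-value, whereas the 0.242 quoted above is a one-tailed table value:

```python
from scipy.stats import kendalltau

orthotist_1 = [1, 2, 3, 4, 5]  # rankings rearranged into natural order (Table C)
orthotist_2 = [2, 1, 4, 5, 3]

tau, p_value = kendalltau(orthotist_1, orthotist_2)
print(f"tau = {tau:.2f}")      # 0.40, matching the hand calculation
print(f"p   = {p_value:.3f}")  # two-sided p-value; the 0.242 above is one-tailed
```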
The Chi-square test of independence is a nonparametric test designed to determine whether two variables are independent or related. This test is designed to be used with data that are expressed as frequencies; it should not be used to analyze data expressed as proportions (percentages) unless they are first converted to frequencies.
The application of Chi-square to contingency tables can best be illustrated by working through an example. Suppose a sample of new graduates of an orthotic educational program and orthotists with more than five years of clinical experience were asked whether research should be a part of every orthotist's practice. The replies were recorded as "Agree" or "Disagree."
Step 1. Organize the data into the form of a 2 x 2 contingency table (see Table D ). Note the table includes row totals, column totals and the grand total of subjects included in the sample.
The actual numbers of "Agree" responses were 82 from recent graduates and 30 from experienced orthotists. The numbers disagreeing with the statement were 12 and 66, respectively.
The rationale that underlies Chi-square is based on the differences between the observed and the expected frequencies. The observed frequencies are the data produced by the survey. The expected frequencies are computed on the assumption that no difference existed between the groups except that resulting from chance occurrences.
Step 2. The expected frequencies are computed as follows:

E = (row total × column total) / N
Cross-tabulation and the computation of Chi-square can be made when the variables are nominal as well as ordinal, interval or ratio, and the Chi-square statistic is useful for discrete or continuous variables. However, it is assumed that data occur in every category; thus, no cell may have an observed frequency of zero. The formula for the degrees of freedom for calculating Chi-square and the contingency coefficient is:

df = (r − 1)(k − 1)

where k = number of columns in the contingency table and r = number of rows in the contingency table.
Step 3. The Chi-square (X2) is calculated using Equation (6):

χ² = Σ Σ (Oij − Eij)² / Eij     (6)

where Oij is the observed number of cases found in the ith row of the jth column, and Eij is the expected frequency obtained by multiplying the two marginal totals for each cell and dividing the product by N.
Step 4. The Chi-square is computed by finding the difference between the observed and expected frequencies in each cell, squaring that difference and dividing by the expected frequency of that cell. The result for each cell is then added (see Table E ), and the total is the value of the Chi-square. (Chi-square = X2 = 61.84.)
Step 5. Consulting a table of Chi-square values in a textbook of statistics, using 1 degree of freedom and the 0.05-level of significance, we find that a minimum value of 3.84 is needed for the observed frequency to be considered significantly different from the expected frequency. In this example, the value of X2 greatly exceeds that minimum value; thus, the observed values are significantly different from the expected values.
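The same table can be analyzed with a library routine as a check on the hand computation; the Python sketch below (assuming scipy and numpy) reproduces the uncorrected Chi-square, with small differences from 61.84 reflecting rounding of the expected frequencies in the hand calculation:

```python
import numpy as np
from scipy.stats import chi2_contingency

# Observed frequencies from Table D:
# rows = recent graduates, experienced orthotists; columns = Agree, Disagree.
observed = np.array([[82, 12],
                     [30, 66]])

# correction=False gives the uncorrected Chi-square used in the hand computation;
# scipy otherwise applies the Yates continuity correction to 2 x 2 tables.
chi2, p, dof, expected = chi2_contingency(observed, correction=False)
print(f"Chi-square = {chi2:.2f}, df = {dof}, p = {p:.2e}")
print("Expected frequencies:")
print(expected.round(2))
```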
The use of the Chi-square statistic has important limitations. Although no association is indicated by a zero, a perfect association is not indicated by a 1.00. Moreover, the size of Chi-square is influenced by both the size of the contingency table and the size of the sample.
The addition of rows and columns as a table grows is accompanied by larger and larger values of Chi-square- even when the association remains essentially constant. If the sample size is tripled, the value of Chi-square is tripled, and everything else remains the same. Degrees of freedom depend on the number of rows and columns, not the sample size; thus, inflated values of Chi-square occur for large samples, leading the investigator to conclude the differences between observed and expected frequencies are more significant than warranted. The Chi-square is designed for use with relatively small samples and a limited number of rows and columns.
The correlation coefficient phi corrects for the size of the sample when the table size is 2 x 2. The equation is:

φ = √(χ² / N)
Phi is 0 when no relationship exists and 1 when variables are related perfectly. When tables are greater than 2 x 2, Phi has no upper limit and is not a suitable statistic to use. The statistical significance of Phi may be tested by calculating a corresponding Chi-square value and assigning 1 degree of freedom to it (12):

χ² = Nφ²
Cramer's V is an adjusted version of phi, modified to be suitable for tables larger than 2 x 2. The value of V is zero when no relationship exists and 1 when a perfect relationship exists. The equation for Cramer's V (13) is:

V = √[χ² / (N(k − 1))]

where k is the smaller of the number of rows and columns in the table. Thus, when 2 x 2 tables are involved, Phi may provide a more useful measure of the relationship between the two variables than that provided by Chi-square. For tables larger than 2 x 2, Cramer's V is the statistic of choice.
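Both coefficients follow directly from the Chi-square and the sample size; the short Python helper below (the function name is ours, and numpy is assumed) computes them for the survey table used above:

```python
import numpy as np

def phi_and_cramers_v(table):
    """Phi (for 2 x 2 tables) and Cramer's V from a table of observed frequencies."""
    table = np.asarray(table, dtype=float)
    n = table.sum()
    expected = np.outer(table.sum(axis=1), table.sum(axis=0)) / n
    chi2 = ((table - expected) ** 2 / expected).sum()
    phi = np.sqrt(chi2 / n)              # interpretable only for 2 x 2 tables
    k = min(table.shape)                 # smaller of the number of rows and columns
    cramers_v = np.sqrt(chi2 / (n * (k - 1)))
    return phi, cramers_v

phi, v = phi_and_cramers_v([[82, 12], [30, 66]])
print(f"phi = {phi:.3f}, Cramer's V = {v:.3f}")  # identical for a 2 x 2 table
```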
Two-Group Design: Chi-square (2 x 2)
The Chi-square comparison of differences between two groups is one of the better known and commonly used statistical procedures. The same procedure for Chi-square (x2) as described above can be used to test for the significance of differences in two groups of data that are expressed as frequencies.
Suppose researchers wanted to determine if the proportion of trauma patients being referred for orthotic services in a particular hospital was significantly different than the number being referred for orthotic services in another hospital with a similar mix of patients. During a specific 12-month period, the orthotic department in Hospital A filled 238 requests for orthoses from a pool of 2,222 patients, and the orthotic department in Hospital B filled 221 requests for orthoses from a pool of 1,238 patients. First, the data are organized into a 2 x 2 contingency table (see Table F ).
As before, the general equation for Chi-square is:

χ² = Σ (O − E)² / E

To compute X2 for a contingency table, simply square the difference between the observed and expected frequencies in each cell and divide by the expected frequency of that cell. Finally, total the cells to obtain the X2 value (see Table G ). X2 = 23.72, which is evidence that the experiences of the two hospitals are significantly different.
The Chi-square median test can be used to determine if the medians of two groups are different. For example, all of the male patients fitted with an AFO to correct foot-drop following the onset of hemiplegia were asked to rate the comfort of their footwear when walking with the AFO. Forty-four patients were evaluated; 32 wore normal leather shoes and 12 wore tennis shoes.
Comfort was rated on a nine-point scale (the larger the score, the greater the comfort in walking), and the evaluation was made six months after fitting the AFO and with the subject walking 50 yards. The median comfort rating of the 44 patients was 7.3. The number of subjects rating their comfort above or below the grand median is shown in Table H .
The Chi-square computation viewing the leather shoe and tennis shoe wearers as random samples is shown below. The ratings are discrete units; each patient's rating appears only once, and the ratings are independent.
χ² = n [ Σ Σ (nrc² / (nr nc)) − 1 ]

where n is the total number of observations, nrc is the number of observations in the rcth cell of the contingency table, nr is the number of observations in the rth row of the table, and nc is the number of observations in the cth column of the table.
A table of Chi-square values is then consulted to determine if this calculated value of Chi-square is sufficiently large to represent a statistically significant difference of the mean scores. The degree of freedom is (R - 1) (C - 1) = (1)(1) = 1. In this example, a Chi-square of 3.84 or larger would be needed (at a .05 level of significance) to justify the conclusion that the comfort levels of the two different types of footwear were significantly different.
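A library implementation of the median test is available as well; the Python sketch below (assuming scipy, and using hypothetical comfort ratings since the individual scores behind Table H are not reproduced here) returns the grand median, the Chi-square and the table of counts above and below the median:

```python
from scipy.stats import median_test

# Hypothetical comfort ratings on the 9-point scale; the individual scores
# behind Table H are not reproduced in the text, so these are illustrative only.
leather_shoes = [8, 7, 9, 6, 8, 7, 5, 8, 9, 7, 6, 8, 7, 9, 8, 6]
tennis_shoes  = [5, 6, 7, 4, 6, 5, 7, 5, 6, 4, 5, 6]

# correction=False reproduces the uncorrected Chi-square of the hand procedure.
stat, p_value, grand_median, table = median_test(leather_shoes, tennis_shoes,
                                                 correction=False)
print(f"grand median = {grand_median}, Chi-square = {stat:.2f}, p = {p_value:.3f}")
print("counts above / at-or-below the grand median:")
print(table)
```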
Tukey's quick test is used to determine if the results of two different interventions produced the same or different effects. Suppose a sample of 20 patients with limitation in elbow extension on one side that exceeded 50 degrees was treated with one of two methods for reducing contractures (7). Subgroup A, consisting of 10 patients, was treated with serial casting over a period of one month; Subgroup B, also consisting of 10 patients, was treated with an adjustable splint worn 18 hours a day for one month. The increased range of motion for subjects in the two groups is shown in Table I .
Tukey's quick test is applied by identifying the group containing the largest value and the group containing the smallest value in the two groups. In this example, Group B contains the largest value of either group (41), and Group A contains the smallest value (14). The number of values in Group B that are larger than the largest value in Group A (36) are counted and recorded (in this example, there are 2). Next, the number of values in Group A that are smaller than the smallest value in Group B (18) are counted (there are 2). The two counts are added, and, if the sum is equal to or greater than 7, we conclude the effects of the two treatments are different. If the sum is less than 7, we conclude the effects of the two treatments are not different. In the present example, the sums of the two counts equal 4; therefore, we conclude the effects of the two interventions are not different. In the event the largest and the smallest values occurred in the same group, we conclude automatically that the two treatments did not have different effects. In Tukey's test the number 7 is a constant and is the criterion value to be used with any set of data.
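Because the procedure is simple counting, it is easy to script; the Python sketch below (the function name and the extension-gain values are ours, since Table I is not reproduced here) follows the steps described above and returns the summed count and the verdict:

```python
def tukey_quick_test(group_a, group_b, criterion=7):
    """Tukey's quick test as described above: count values in the group holding the
    overall maximum that exceed the other group's maximum, plus values in the group
    holding the overall minimum that fall below the other group's minimum."""
    high, low = (group_a, group_b) if max(group_a) >= max(group_b) else (group_b, group_a)
    if min(high) <= min(low):
        # Largest and smallest values occur in the same group: no difference.
        return 0, "not different"
    count = sum(1 for x in high if x > max(low)) + sum(1 for x in low if x < min(high))
    return count, ("different" if count >= criterion else "not different")

# Hypothetical gains in elbow extension (degrees); Table I is not reproduced here.
serial_casting    = [14, 16, 20, 22, 25, 27, 28, 30, 33, 36]   # Group A
adjustable_splint = [18, 21, 24, 26, 29, 31, 34, 36, 39, 41]   # Group B
print(tukey_quick_test(serial_casting, adjustable_splint))     # (4, 'not different')
```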
The Mann-Whitney U-test is a rank test for two independent samples, each with a small number of subjects. This test is a good alternative to the parametric t-test. Suppose measurements of the height of the ankle joint axis (in millimeters) in a group of patients receiving services in Orthotic Clinic A are compared with measurements taken from a group of patients in Orthotic Clinic B to determine if they are comparable. Because of the small number of cases, a nonparametric test is selected. The measurements are assigned a rank in ascending order of height, with a rank of 1 being the smallest value:
The ranks then are ordered according to their identity.
The value of the Mann-Whitney U-test is found by determining the number of A scores preceding each B score and summing these counts; in this example U = 18 (rank 2A precedes 3B, giving 1; ranks 2A and 4A precede 5B, giving 2; ranks 2A, 4A and 6A precede 7B, giving 3; and so on for the remaining B scores). Consulting a Mann-Whitney U-test table for nB = 7 (larger sample size), locate the U value (18) on the left-hand margin and nA = 5 at the top of the table. The probability that these two samples are equivalent is 0.562, which is not statistically significant (i.e., the distribution of ankle heights is not different). This procedure is appropriate only when the larger sample size is 8 or smaller. Different procedures and tables are used for samples ranging between 9 and 20 and larger than 20, respectively. Procedures and tables for the Mann-Whitney U-test can be found in Siegel and Castellan (14).
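A library routine is commonly used for this test; the Python sketch below (assuming scipy, with hypothetical ankle-height measurements since the raw values are not reproduced here) reports the U statistic and a two-sided p-value:

```python
from scipy.stats import mannwhitneyu

# Hypothetical ankle-joint-axis heights (mm); the raw measurements behind the
# ranking above are not reproduced in the text, so these are illustrative only.
clinic_a = [62, 64, 66, 68, 70]
clinic_b = [63, 65, 67, 69, 71, 72, 73]

u_stat, p_value = mannwhitneyu(clinic_a, clinic_b, alternative="two-sided")
print(f"U = {u_stat}, p = {p_value:.3f}")
```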
The Wilcoxon matched-pairs signed-rank test is an alternative to the Mann-Whitney test that is used when the samples are dependent. For purposes of illustration, presume the time to ambulate 25 meters is measured with a stopwatch when the patient is wearing a new type of lightweight KAFO and again when wearing a conventional metal KAFO. The ambulation times for each patient are tabulated, and the absolute difference between each pair of numbers is calculated. The nonzero differences then are ranked according to their absolute values and separated into ranks associated with positive and negative differences. Table J shows the sums of the positive and negative ranks for this example.
As in the case with the Mann-Whitney procedures for analyzing differences between independent samples, the resulting score, in this case called a T value, is used to look up the statistical significance of the differences in a table (for example, Table G in the appendix of Reference 4). In this example, a T value of 8 or more indicates that the two situations are significantly different, and the subjects walked more quickly when wearing the lightweight KAFO.
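The corresponding library routine for dependent samples is shown below as a Python sketch (assuming scipy, with hypothetical ambulation times since Table J is not reproduced here); it returns the T statistic (the smaller of the two rank sums) and a p-value:

```python
from scipy.stats import wilcoxon

# Hypothetical 25-meter ambulation times (seconds) for the same eight patients
# wearing each orthosis; the values in Table J are not reproduced in the text.
lightweight_kafo  = [28.4, 31.2, 26.9, 35.0, 29.8, 33.1, 27.5, 30.2]
conventional_kafo = [30.1, 33.5, 27.4, 36.8, 31.9, 34.0, 29.6, 31.0]

t_stat, p_value = wilcoxon(lightweight_kafo, conventional_kafo)
print(f"T = {t_stat}, p = {p_value:.3f}")
```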
This method functions like the conventional one-way analysis of variance. The null hypothesis is tested to determine if the differences among samples show true population differences or whether they represent chance variations to be expected among several samples from the same population. The test is based on the assumptions that ranks within each sample constitute a random sample for the set of numbers 1 through N (15) and that the variable being tested has a continuous distribution (4). Scores in all samples are combined and arranged in order of magnitude so that ranks can be given to each score. The lowest score is assigned the rank of 1. The scores then are replaced in their respective samples with appropriate ranks. The ranks for each sample are summed. The assumption is that mean rank sums are equal for all samples and equal to the mean of the N ranks, (N + 1)/2, if the samples (K) are from the same population (16). Both equal- and unequal-sized samples can be used in this test because the sums of sample ranks (ΣR) are pooled in the equation. The statistic H used in this test can be defined by the equation:
H = [12 / (N(N + 1))] Σ (Rj² / nj) − 3(N + 1)

where N is the number of scores in all samples combined, Rj is the sum of the ranks in the jth sample, and nj is the number of scores in that sample. The random sample distribution of H is approximated by a Chi-square distribution of K-1 degrees of freedom, where K is the number of samples. The Chi-square probability can be found in appendix tables published in Reference 11. The Kruskal-Wallis One-Way Analysis of Variance by Ranks is used when assumptions for the parametric Analysis of Variance are not suitable for the data, or when the level of data is less than interval measures.
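The test is also available in statistics libraries; the Python sketch below (assuming scipy, with hypothetical scores for three samples) computes H and the corresponding Chi-square probability with K − 1 = 2 degrees of freedom:

```python
from scipy.stats import kruskal

# Hypothetical scores for three independent samples (illustrative only).
sample_1 = [12, 15, 14, 18, 11]
sample_2 = [22, 19, 24, 17, 20, 23]
sample_3 = [16, 13, 21, 14, 19]

h_stat, p_value = kruskal(sample_1, sample_2, sample_3)
print(f"H = {h_stat:.2f}, df = 2, p = {p_value:.4f}")
```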
This article provides students and clinicians in the field of prosthetics/orthotics with basic information about the distinctions between parametric and nonparametric statistical methods. Knowledge of these distinctions is essential in reaching a decision about which statistical method would be appropriate for testing the strength of a correlation between two sets of data or determining if the differences between two sets of observations are great enough to be considered significant from a statistical point of view. One still must make the judgment if the difference is of clinical significance.
Some of the most commonly used nonparametric statistical methods have been described in sufficient detail that readers should be able to use the methods to answer questions pertinent to their own practices, using data accumulated in their own setting.
L. DON LEHMKUHL, PHD, FAPTA, is associate professor in the department of physical medicine and rehabilitation at Baylor College of Medicine, Houston, TX 77030. | http://www.oandp.org/jpo/library/1996_03_105.asp | 13 |
38 |
Carbon nanotubes (CNTs) are allotropes of carbon with a cylindrical nanostructure. Nanotubes have been constructed with length-to-diameter ratio of up to 132,000,000:1, significantly larger than for any other material. These cylindrical carbon molecules have unusual properties, which are valuable for nanotechnology, electronics, optics and other fields of materials science and technology. In particular, owing to their extraordinary thermal conductivity and mechanical and electrical properties, carbon nanotubes find applications as additives to various structural materials. For instance, nanotubes form a tiny portion of the material(s) in some (primarily carbon fiber) baseball bats, golf clubs, or car parts.
Nanotubes are members of the fullerene structural family. Their name is derived from their long, hollow structure with the walls formed by one-atom-thick sheets of carbon, called graphene. These sheets are rolled at specific and discrete ("chiral") angles, and the combination of the rolling angle and radius decides the nanotube properties; for example, whether the individual nanotube shell is a metal or semiconductor. Nanotubes are categorized as single-walled nanotubes (SWNTs) and multi-walled nanotubes (MWNTs). Individual nanotubes naturally align themselves into "ropes" held together by van der Waals forces, more specifically, pi-stacking.
Applied quantum chemistry, specifically, orbital hybridization best describes chemical bonding in nanotubes. The chemical bonding of nanotubes is composed entirely of sp2 bonds, similar to those of graphite. These bonds, which are stronger than the sp3 bonds found in alkanes and diamond, provide nanotubes with their unique strength.
There is no consensus on some terms describing carbon nanotubes in scientific literature: both "-wall" and "-walled" are being used in combination with "single", "double", "triple" or "multi", and the letter C is often omitted in the abbreviation; for example, multi-walled carbon nanotube (MWNT).
Most single-walled nanotubes (SWNT) have a diameter of close to 1 nanometer, with a tube length that can be many millions of times longer. The structure of a SWNT can be conceptualized by wrapping a one-atom-thick layer of graphite called graphene into a seamless cylinder. The way the graphene sheet is wrapped is represented by a pair of indices (n,m). The integers n and m denote the number of unit vectors along two directions in the honeycomb crystal lattice of graphene. If m = 0, the nanotubes are called zigzag nanotubes, and if n = m, the nanotubes are called armchair nanotubes. Otherwise, they are called chiral. The diameter of an ideal nanotube can be calculated from its (n,m) indices as follows
d = (a/π) √(n² + nm + m²) = 0.0783 √(n² + nm + m²) nm,

where a = 0.246 nm is the lattice constant of graphene.
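For readers who want to evaluate the relation numerically, the short Python sketch below computes the ideal diameter for a few (n,m) pairs; the function name is ours:

```python
import math

GRAPHENE_LATTICE_CONSTANT_NM = 0.246  # a, in nanometers

def swnt_diameter_nm(n, m):
    """Ideal single-walled nanotube diameter (nm) from its chiral indices (n, m)."""
    return (GRAPHENE_LATTICE_CONSTANT_NM / math.pi) * math.sqrt(n**2 + n * m + m**2)

print(f"(10,10) armchair: {swnt_diameter_nm(10, 10):.2f} nm")  # about 1.36 nm
print(f"(17, 0) zigzag:   {swnt_diameter_nm(17, 0):.2f} nm")   # about 1.33 nm
```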
SWNTs are an important variety of carbon nanotube because most of their properties change significantly with the (n,m) values, and this dependence is non-monotonic (see Kataura plot). In particular, their band gap can vary from zero to about 2 eV and their electrical conductivity can show metallic or semiconducting behavior. Single-walled nanotubes are likely candidates for miniaturizing electronics. The most basic building block of these systems is the electric wire, and SWNTs with diameters of an order of a nanometer can be excellent conductors. One useful application of SWNTs is in the development of the first intermolecular field-effect transistors (FET). The first intermolecular logic gate using SWCNT FETs was made in 2001. A logic gate requires both a p-FET and an n-FET. Because SWNTs are p-FETs when exposed to oxygen and n-FETs otherwise, it is possible to protect half of an SWNT from oxygen exposure, while exposing the other half to oxygen. This results in a single SWNT that acts as a NOT logic gate with both p and n-type FETs within the same molecule.
Single-walled nanotubes are dropping precipitously in price, from around $1500 per gram as of 2000 to retail prices of around $50 per gram of as-produced 40–60% by weight SWNTs as of March 2010.
Multi-walled nanotubes (MWNT) consist of multiple rolled layers (concentric tubes) of graphene. There are two models that can be used to describe the structures of multi-walled nanotubes. In the Russian Doll model, sheets of graphite are arranged in concentric cylinders, e.g., a (0,8) single-walled nanotube (SWNT) within a larger (0,17) single-walled nanotube. In the Parchment model, a single sheet of graphite is rolled in around itself, resembling a scroll of parchment or a rolled newspaper. The interlayer distance in multi-walled nanotubes is close to the distance between graphene layers in graphite, approximately 3.4 Å. The Russian Doll structure is observed more commonly. Its individual shells can be described as SWNTs, which can be metallic or semiconducting. Because of statistical probability and restrictions on the relative diameters of the individual tubes, one of the shells, and thus the whole MWNT, is usually a zero-gap metal.
Double-walled carbon nanotubes (DWNT) form a special class of nanotubes because their morphology and properties are similar to those of SWNT but their resistance to chemicals is significantly improved. This is especially important when functionalization is required (this means grafting of chemical functions at the surface of the nanotubes) to add new properties to the CNT. In the case of SWNT, covalent functionalization will break some C=C double bonds, leaving "holes" in the structure on the nanotube and, thus, modifying both its mechanical and electrical properties. In the case of DWNT, only the outer wall is modified. DWNT synthesis on the gram-scale was first proposed in 2003 by the CCVD technique, from the selective reduction of oxide solutions in methane and hydrogen.
The telescopic motion ability of inner shells and their unique mechanical properties will permit the use of multi-walled nanotubes as main movable arms in coming nanomechanical devices. The retraction force that opposes this telescopic motion is caused by the Lennard-Jones interaction between shells, and its value is about 1.5 nN.
In theory, a nanotorus is a carbon nanotube bent into a torus (doughnut shape). Nanotori are predicted to have many unique properties, such as magnetic moments 1000 times larger than previously expected for certain specific radii. Properties such as magnetic moment, thermal stability, etc. vary widely depending on radius of the torus and radius of the tube.
Carbon nanobuds are a newly created material combining two previously discovered allotropes of carbon: carbon nanotubes and fullerenes. In this new material, fullerene-like "buds" are covalently bonded to the outer sidewalls of the underlying carbon nanotube. This hybrid material has useful properties of both fullerenes and carbon nanotubes. In particular, they have been found to be exceptionally good field emitters. In composite materials, the attached fullerene molecules may function as molecular anchors preventing slipping of the nanotubes, thus improving the composite’s mechanical properties.
Graphenated carbon nanotubes (g-CNTs)
Graphenated CNTs are a relatively new hybrid that combines graphitic foliates grown along the sidewalls of multiwalled or bamboo-style CNTs. Yu et al. reported on "chemically bonded graphene leaves" growing along the sidewalls of CNTs. Stoner et al. described these structures as "graphenated CNTs" and reported on their use for enhanced supercapacitor performance. Hsu et al. further reported on similar structures formed on carbon fiber paper, also for use in supercapacitor applications. The foliate density can vary as a function of deposition conditions (e.g. temperature and time), with their structure ranging from a few layers of graphene (< 10) to thicker, more graphite-like structures.
The fundamental advantage of an integrated graphene-CNT structure is the high surface area three-dimensional framework of the CNTs coupled with the high edge density of graphene. Graphene edges provide significantly higher charge density and reactivity than the basal plane, but they are difficult to arrange in a three-dimensional, high volume-density geometry. CNTs are readily aligned in a high density geometry (i.e., a vertically aligned forest) but lack high charge density surfaces—the sidewalls of the CNTs are similar to the basal plane of graphene and exhibit low charge density except where edge defects exist. Depositing a high density of graphene foliates along the length of aligned CNTs can significantly increase the total charge capacity per unit of nominal area as compared to other carbon nanostructures.
Nitrogen-doped carbon nanotubes
Nitrogen-doped carbon nanotubes (N-CNTs) can be produced through five main methods: chemical vapor deposition (CVD), high-temperature and high-pressure reactions, gas-solid reaction of amorphous carbon with NH3 at high temperature, solid reaction, and solvothermal synthesis.
N-CNTs can also be prepared by a CVD method of pyrolyzing melamine under Ar at elevated temperatures of 800–980 °C. However, synthesis via CVD using melamine results in the formation of bamboo-structured CNTs. XPS spectra of grown N-CNTs reveal nitrogen in five main components: pyridinic nitrogen, pyrrolic nitrogen, quaternary nitrogen, and nitrogen oxides. Furthermore, the synthesis temperature affects the type of nitrogen configuration.
Nitrogen doping plays a pivotal role in lithium storage. N-doping provides defects in the walls of CNTs, allowing Li ions to diffuse into the interwall space. It also increases capacity by providing more favorable binding at N-doped sites. N-CNTs are also much more reactive to metal oxide nanoparticle deposition, which can further enhance storage capacity, especially in anode materials for Li-ion batteries. However, boron-doped nanotubes have been shown to make batteries with triple the capacity.
A carbon peapod is a novel hybrid carbon material that traps fullerenes inside a carbon nanotube. It can possess interesting magnetic properties upon heating and irradiation. It can also be applied as an oscillator during theoretical investigations and predictions.
Cup-stacked carbon nanotubes
Cup-stacked carbon nanotubes (CSCNTs) differ from other quasi-1D carbon structures, which normally behave as quasi-metallic conductors of electrons. CSCNTs exhibit semiconducting behaviors due to the stacking microstructure of graphene layers.
Extreme carbon nanotubes
The observation of the longest carbon nanotubes (18.5 cm long) was reported in 2009. These nanotubes were grown on Si substrates using an improved chemical vapor deposition (CVD) method and represent electrically uniform arrays of single-walled carbon nanotubes.
The thinnest carbon nanotube is the armchair (2,2) CNT with a diameter of 3 Å. This nanotube was grown inside a multi-walled carbon nanotube. The carbon nanotube type was assigned by a combination of high-resolution transmission electron microscopy (HRTEM), Raman spectroscopy and density functional theory (DFT) calculations.
The thinnest freestanding single-walled carbon nanotube is about 4.3 Å in diameter. Researchers suggested that it can be either (5,1) or (4,2) SWCNT, but exact type of carbon nanotube remains questionable. (3,3), (4,3) and (5,1) carbon nanotubes (all about 4 Å in diameter) were unambiguously identified using more precise aberration-corrected high-resolution transmission electron microscopy. However, they were found inside of double-walled carbon nanotubes.
Carbon nanotubes are the strongest and stiffest materials yet discovered in terms of tensile strength and elastic modulus respectively. This strength results from the covalent sp2 bonds formed between the individual carbon atoms. In 2000, a multi-walled carbon nanotube was tested to have a tensile strength of 63 gigapascals (GPa). (For illustration, this translates into the ability to endure tension of a weight equivalent to 6422 kg (14,158 lbs) on a cable with cross-section of 1 mm2.) Further studies, such as one conducted in 2008, revealed that individual CNT shells have strengths of up to ~100 GPa, which is in agreement with quantum/atomistic models. Since carbon nanotubes have a low density for a solid of 1.3 to 1.4 g/cm3, their specific strength of up to 48,000 kN·m·kg−1 is the best of known materials, compared to high-carbon steel's 154 kN·m·kg−1.
Under excessive tensile strain, the tubes will undergo plastic deformation, which means the deformation is permanent. This deformation begins at strains of approximately 5% and can increase the maximum strain the tubes undergo before fracture by releasing strain energy.
Although the strength of individual CNT shells is extremely high, weak shear interactions between adjacent shells and tubes leads to significant reductions in the effective strength of multi-walled carbon nanotubes and carbon nanotube bundles down to only a few GPa’s. This limitation has been recently addressed by applying high-energy electron irradiation, which crosslinks inner shells and tubes, and effectively increases the strength of these materials to ~60 GPa for multi-walled carbon nanotubes and ~17 GPa for double-walled carbon nanotube bundles.
CNTs are not nearly as strong under compression. Because of their hollow structure and high aspect ratio, they tend to undergo buckling when placed under compressive, torsional, or bending stress.
|Material||Young's modulus (TPa)||Tensile strength (GPa)||Elongation at break (%)|
|SWNT (E)||~1 (from 1 to 5)||13–53||16|
E: Experimental observation; T: Theoretical prediction
The above discussion referred to axial properties of the nanotube, whereas simple geometrical considerations suggest that carbon nanotubes should be much softer in the radial direction than along the tube axis. Indeed, TEM observation of radial elasticity suggested that even the van der Waals forces can deform two adjacent nanotubes. Nanoindentation experiments performed by several groups on multiwalled carbon nanotubes, and tapping/contact-mode atomic force microscope measurements performed on single-walled carbon nanotubes, indicated a Young's modulus of the order of several GPa, confirming that CNTs are indeed rather soft in the radial direction.
Standard single-walled carbon nanotubes can withstand a pressure up to 24 GPa without deformation. They then undergo a transformation to superhard phase nanotubes. Maximum pressures measured using current experimental techniques are around 55 GPa. However, these new superhard phase nanotubes collapse at an even higher, albeit unknown, pressure.
Kinetic properties
Multi-walled nanotubes are multiple concentric nanotubes precisely nested within one another. These exhibit a striking telescoping property whereby an inner nanotube core may slide, almost without friction, within its outer nanotube shell, thus creating an atomically perfect linear or rotational bearing. This is one of the first true examples of molecular nanotechnology, the precise positioning of atoms to create useful machines. Already, this property has been utilized to create the world's smallest rotational motor. Future applications such as a gigahertz mechanical oscillator are also envisaged.
Electrical properties
Because of the symmetry and unique electronic structure of graphene, the structure of a nanotube strongly affects its electrical properties. For a given (n,m) nanotube, if n = m, the nanotube is metallic; if n − m is a multiple of 3, then the nanotube is semiconducting with a very small band gap, otherwise the nanotube is a moderate semiconductor. Thus all armchair (n = m) nanotubes are metallic, and nanotubes (6,4), (9,1), etc. are semiconducting.
However, this rule has exceptions, because curvature effects in small diameter carbon nanotubes can strongly influence electrical properties. Thus, a (5,0) SWCNT that should be semiconducting in fact is metallic according to the calculations. Likewise, vice versa—zigzag and chiral SWCNTs with small diameters that should be metallic have finite gap (armchair nanotubes remain metallic). In theory, metallic nanotubes can carry an electric current density of 4 × 109 A/cm2, which is more than 1,000 times greater than those of metals such as copper, where for copper interconnects current densities are limited by electromigration.
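The simple rule described above is easy to encode; the Python sketch below (the function name is ours) classifies a tube from its (n,m) indices, ignoring the small-diameter curvature effects just discussed:

```python
def swcnt_electronic_type(n, m):
    """Classify an (n, m) single-walled nanotube by the simple rule stated above;
    the curvature effects seen in very small-diameter tubes are ignored."""
    if n == m:
        return "metallic (armchair)"
    if (n - m) % 3 == 0:
        return "semiconducting with a very small band gap"
    return "moderate semiconductor"

# (5,0) is a known exception: the rule calls it semiconducting, but calculations
# discussed above find it to be metallic because of curvature effects.
for indices in [(10, 10), (6, 4), (9, 1), (9, 0), (5, 0)]:
    print(indices, swcnt_electronic_type(*indices))
```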
Because of their nanoscale cross-section, electrons propagate only along the tube's axis and electron transport involves quantum effects. As a result, carbon nanotubes are frequently referred to as one-dimensional conductors. The maximum electrical conductance of a single-walled carbon nanotube is 2G0, where G0 = 2e2/h is the conductance of a single ballistic quantum channel.
There have been reports of intrinsic superconductivity in carbon nanotubes. Many other experiments, however, found no evidence of superconductivity, and the validity of these claims of intrinsic superconductivity remains a subject of debate.
Optical properties
EM Wave absorption
One of the more recently researched properties of multi-walled carbon nanotubes (MWNTs) is their wave absorption characteristics, specifically microwave absorption. Interest in this research is driven by the current military push for radar-absorbing materials (RAM) to improve the stealth characteristics of aircraft and other military vehicles. There has been some research on filling MWNTs with metals, such as Fe, Ni, and Co, to increase the absorption effectiveness of MWNTs in the microwave regime. Thus far, this research has shown improvements in both maximum absorption and bandwidth of adequate absorption. The absorptive properties change when the tubes are filled because the complex permeability (μr) and complex permittivity (εr), shown in the equations below, vary depending on how the MWNTs are filled and what medium they are suspended in. The direct relationship between μr, εr and the other system parameters that affect the absorption, the sample thickness d and the frequency f, is shown in the equations below, where Zin is the normalized input impedance. As these equations show, the absorption characteristics vary with frequency. Because of this, it is convenient to set a baseline reflection loss (R.L.) that is deemed effective and to determine the bandwidth, within a given frequency range, that produces the desired reflection loss. A common R.L. to use for this bandwidth determination is −10 dB, which corresponds to a loss of over 90% of the incoming wave. This bandwidth is usually maximized at the same time as the absorption, by satisfying the impedance-matching condition Zin = 1. In work done at Beijing Jiaotong University, it was found that Fe-filled MWNTs exhibited a maximum reflection loss of −22.73 dB and a bandwidth of 4.22 GHz for a reflection loss of −10 dB.
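For reference, a representative form of the single-layer, metal-backed absorber (transmission-line) model that relates these quantities is given below; this is the commonly used textbook form and may differ in detail from the expressions in the cited work. Here c is the speed of light and j the imaginary unit.

```latex
Z_{\mathrm{in}} = \sqrt{\frac{\mu_r}{\varepsilon_r}}\,
\tanh\!\left( j\,\frac{2\pi f d}{c}\,\sqrt{\mu_r \varepsilon_r} \right),
\qquad
\mathrm{R.L.}\,(\mathrm{dB}) = 20 \log_{10}\left| \frac{Z_{\mathrm{in}} - 1}{Z_{\mathrm{in}} + 1} \right|
```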
Thermal properties
All nanotubes are expected to be very good thermal conductors along the tube, exhibiting a property known as "ballistic conduction", but good insulators laterally to the tube axis. Measurements show that a SWNT has a room-temperature thermal conductivity along its axis of about 3500 W·m−1·K−1; compare this to copper, a metal well known for its good thermal conductivity, which transmits 385 W·m−1·K−1. A SWNT has a room-temperature thermal conductivity across its axis (in the radial direction) of about 1.52 W·m−1·K−1, which is about as thermally conductive as soil. The temperature stability of carbon nanotubes is estimated to be up to 2800 °C in vacuum and about 750 °C in air.
As with any material, the existence of crystallographic defects affects the material's properties. Defects can occur in the form of atomic vacancies. High levels of such defects can lower the tensile strength by up to 85%. An important example is the Stone-Wales defect, which creates a pentagon and heptagon pair by rearrangement of the bonds. Because of the very small structure of CNTs, the tensile strength of the tube depends on its weakest segment in a manner similar to a chain, where the strength of the weakest link sets the maximum strength of the chain.
Crystallographic defects also affect the tube's electrical properties. A common result is lowered conductivity through the defective region of the tube. A defect in armchair-type tubes (which can conduct electricity) can cause the surrounding region to become semiconducting, and single monoatomic vacancies induce magnetic properties.
Crystallographic defects strongly affect the tube's thermal properties. Such defects lead to phonon scattering, which in turn increases the relaxation rate of the phonons. This reduces the mean free path and reduces the thermal conductivity of nanotube structures. Phonon transport simulations indicate that substitutional defects such as nitrogen or boron will primarily lead to scattering of high-frequency optical phonons. However, larger-scale defects such as Stone-Wales defects cause phonon scattering over a wide range of frequencies, leading to a greater reduction in thermal conductivity.
The toxicity of carbon nanotubes has been an important question in nanotechnology. Such research has just begun. The data are still fragmentary and subject to criticism. Preliminary results highlight the difficulties in evaluating the toxicity of this heterogeneous material. Parameters such as structure, size distribution, surface area, surface chemistry, surface charge, and agglomeration state as well as purity of the samples, have considerable impact on the reactivity of carbon nanotubes. However, available data clearly show that, under some conditions, nanotubes can cross membrane barriers, which suggests that, if raw materials reach the organs, they can induce harmful effects such as inflammatory and fibrotic reactions.
Results of rodent studies collectively show that regardless of the process by which CNTs were synthesized and the types and amounts of metals they contained, CNTs were capable of producing inflammation, epithelioid granulomas (microscopic nodules), fibrosis, and biochemical/toxicological changes in the lungs. Comparative toxicity studies in which mice were given equal weights of test materials showed that SWCNTs were more toxic than quartz, which is considered a serious occupational health hazard when chronically inhaled. As a control, ultrafine carbon black was shown to produce minimal lung responses.
The needle-like fiber shape of CNTs is similar to asbestos fibers. This raises the idea that widespread use of carbon nanotubes may lead to pleural mesothelioma, a cancer of the lining of the lungs or peritoneal mesothelioma, a cancer of the lining of the abdomen (both caused by exposure to asbestos). A recently published pilot study supports this prediction. Scientists exposed the mesothelial lining of the body cavity of mice to long multiwalled carbon nanotubes and observed asbestos-like, length-dependent, pathogenic behavior that included inflammation and formation of lesions known as granulomas. Authors of the study conclude:
This is of considerable importance, because research and business communities continue to invest heavily in carbon nanotubes for a wide range of products under the assumption that they are no more hazardous than graphite. Our results suggest the need for further research and great caution before introducing such products into the market if long-term harm is to be avoided.
Although further research is required, the available data suggests that under certain conditions, especially those involving chronic exposure, carbon nanotubes can pose a serious risk to human health.
Techniques have been developed to produce nanotubes in sizeable quantities, including arc discharge, laser ablation, high-pressure carbon monoxide disproportionation (HiPco), and chemical vapor deposition (CVD). Most of these processes take place in vacuum or with process gases. CVD growth of CNTs can occur in vacuum or at atmospheric pressure. Large quantities of nanotubes can be synthesized by these methods; advances in catalysis and continuous growth processes are making CNTs more commercially viable.
Arc discharge
Nanotubes were observed in 1991 in the carbon soot of graphite electrodes during an arc discharge that used a current of 100 amps and was intended to produce fullerenes. However, the first macroscopic production of carbon nanotubes was achieved in 1992 by two researchers at NEC's Fundamental Research Laboratory, using the same method as in 1991. During this process, the carbon contained in the negative electrode sublimates because of the high discharge temperatures. Because nanotubes were initially discovered using this technique, it has been the most widely used method of nanotube synthesis.
The yield for this method is up to 30% by weight, and it produces both single- and multi-walled nanotubes with lengths of up to 50 micrometers and few structural defects.
Laser ablation
In the laser ablation process, a pulsed laser vaporizes a graphite target in a high-temperature reactor while an inert gas is bled into the chamber. Nanotubes develop on the cooler surfaces of the reactor as the vaporized carbon condenses. A water-cooled surface may be included in the system to collect the nanotubes.
This process was developed by Dr. Richard Smalley and co-workers at Rice University, who, at the time of the discovery of carbon nanotubes, were blasting metals with a laser to produce various metal molecules. When they heard of the existence of nanotubes, they replaced the metals with graphite to create multi-walled carbon nanotubes. Later that year, the team used a composite of graphite and metal catalyst particles (the best yield was from a cobalt and nickel mixture) to synthesize single-walled carbon nanotubes.
The laser ablation method yields around 70% and produces primarily single-walled carbon nanotubes with a controllable diameter determined by the reaction temperature. However, it is more expensive than either arc discharge or chemical vapor deposition.
Plasma torch
Single-walled carbon nanotubes can be synthesized by the induction thermal plasma method, discovered in 2005 by groups from the University of Sherbrooke and the National Research Council of Canada. The method is similar to the arc-discharge process in that both use ionized gas to reach the high temperature necessary to vaporize carbon-containing substances and the metal catalysts necessary for the ensuing nanotube growth. The thermal plasma is induced by high-frequency oscillating currents in a coil and is maintained in flowing inert gas. Typically, a feedstock of carbon black and metal catalyst particles is fed into the plasma and then cooled down to form single-walled carbon nanotubes. Different single-wall carbon nanotube diameter distributions can be synthesized.
The induction thermal plasma method can produce up to 2 grams of nanotube material per minute, which is higher than the arc-discharge or the laser ablation methods.
Chemical vapor deposition (CVD)
The catalytic vapor phase deposition of carbon was reported in 1952 and 1959, but it was not until 1993 that carbon nanotubes were formed by this process. In 2007, researchers at the University of Cincinnati (UC) developed a process to grow aligned carbon nanotube arrays of 18 mm length on a FirstNano ET3000 carbon nanotube growth system.
During CVD, a substrate is prepared with a layer of metal catalyst particles, most commonly nickel, cobalt, iron, or a combination. The metal nanoparticles can also be produced in other ways, including reduction of oxides or oxide solid solutions. The diameters of the nanotubes that are to be grown are related to the size of the metal particles. This can be controlled by patterned (or masked) deposition of the metal, annealing, or by plasma etching of a metal layer. The substrate is heated to approximately 700 °C. To initiate the growth of nanotubes, two gases are bled into the reactor: a process gas (such as ammonia, nitrogen or hydrogen) and a carbon-containing gas (such as acetylene, ethylene, ethanol or methane). Nanotubes grow at the sites of the metal catalyst; the carbon-containing gas is broken apart at the surface of the catalyst particle, and the carbon is transported to the edges of the particle, where it forms the nanotubes. This mechanism is still being studied. The catalyst particles can stay at the tips of the growing nanotube during the growth process, or remain at the nanotube base, depending on the adhesion between the catalyst particle and the substrate. Thermal catalytic decomposition of hydrocarbons has become an active area of research and can be a promising route for the bulk production of CNTs. The fluidised-bed reactor is the most widely used reactor for CNT preparation; scale-up of the reactor is the major challenge.
CVD is a common method for the commercial production of carbon nanotubes. For this purpose, the metal nanoparticles are mixed with a catalyst support such as MgO or Al2O3 to increase the surface area for higher yield of the catalytic reaction of the carbon feedstock with the metal particles. One issue in this synthesis route is the removal of the catalyst support via an acid treatment, which sometimes could destroy the original structure of the carbon nanotubes. However, alternative catalyst supports that are soluble in water have proven effective for nanotube growth.
If a plasma is generated by the application of a strong electric field during the growth process (plasma-enhanced chemical vapor deposition), then the nanotube growth will follow the direction of the electric field. By adjusting the geometry of the reactor it is possible to synthesize vertically aligned carbon nanotubes (i.e., perpendicular to the substrate), a morphology that has been of interest to researchers studying electron emission from nanotubes. Without the plasma, the resulting nanotubes are often randomly oriented. Under certain reaction conditions, even in the absence of a plasma, closely spaced nanotubes will maintain a vertical growth direction, resulting in a dense array of tubes resembling a carpet or forest.
Of the various means for nanotube synthesis, CVD shows the most promise for industrial-scale deposition because of its favorable cost per unit and because CVD is capable of growing nanotubes directly on a desired substrate, whereas the nanotubes must be collected in the other growth techniques. The growth sites are controllable by careful deposition of the catalyst. In 2007, a team from Meijo University demonstrated a high-efficiency CVD technique for growing carbon nanotubes from camphor. Researchers at Rice University, until recently led by the late Richard Smalley, have concentrated on finding methods to produce large, pure amounts of particular types of nanotubes. Their approach grows long fibers from many small seeds cut from a single nanotube; all of the resulting fibers were found to have the same diameter as the original nanotube and are expected to be of the same type.
Super-growth CVD
The super-growth CVD (water-assisted chemical vapor deposition) process was developed by Kenji Hata, Sumio Iijima and co-workers at AIST, Japan. In this process, the activity and lifetime of the catalyst are enhanced by the addition of water to the CVD reactor. Dense, millimeter-tall nanotube "forests", aligned normal to the substrate, were produced. The forest growth rate can be expressed as
H(t) = βτ(1 − e^(−t/τ))
In this equation, H(t) is the forest height after growth time t, β is the initial growth rate and τ is the characteristic catalyst lifetime.
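As an illustration of this growth model, the sketch below evaluates the forest height for hypothetical parameter values; β and τ here are illustrative assumptions, not values from the original study.

```python
import math

def forest_height_um(t_min: float, beta_um_per_min: float, tau_min: float) -> float:
    """Forest height H(t) = beta * tau * (1 - exp(-t/tau)), in micrometers.

    beta: initial growth rate (um/min); tau: characteristic catalyst lifetime (min).
    The parameter values used below are illustrative only.
    """
    return beta_um_per_min * tau_min * (1.0 - math.exp(-t_min / tau_min))

# Hypothetical parameters: beta = 250 um/min, tau = 15 min.
for t in (1, 5, 10, 30, 60):
    h_mm = forest_height_um(t, 250.0, 15.0) / 1000.0
    print(f"t = {t:2d} min -> H = {h_mm:.2f} mm")
```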
Their specific surface area exceeds 1,000 m²/g (capped) or 2,200 m²/g (uncapped), surpassing the value of 400–1,000 m²/g for HiPco samples. The synthesis efficiency is about 100 times higher than for the laser ablation method. The time required to grow SWNT forests 2.5 mm tall by this method was 10 minutes in 2004. Those SWNT forests can be easily separated from the catalyst, yielding clean SWNT material (purity >99.98%) without further purification. For comparison, as-grown HiPco CNTs contain about 5–35% metal impurities and are therefore purified through dispersion and centrifugation, which damages the nanotubes. The super-growth process avoids this problem. Patterned, highly organized single-walled nanotube structures have been successfully fabricated using the super-growth technique.
The mass density of super-growth CNTs is about 0.037 g/cm³, much lower than that of conventional CNT powders (~1.34 g/cm³), probably because the latter contain metals and amorphous carbon.
The super-growth method is basically a variation of CVD. Therefore, it is possible to grow material containing SWNTs, DWNTs and MWNTs and to alter their ratios by tuning the growth conditions. Their ratios change with the thickness of the catalyst layer; when many MWNTs are included, the tube diameters are wide.
The vertically aligned nanotube forests originate from a "zipping effect" when they are immersed in a solvent and dried. The zipping effect is caused by the surface tension of the solvent and the van der Waals forces between the carbon nanotubes. It aligns the nanotubes into a dense material, which can be formed in various shapes, such as sheets and bars, by applying weak compression during the process. Densification increases the Vickers hardness by about 70 times, to a density of 0.55 g/cm³. The packed carbon nanotubes are more than 1 mm long and have a carbon purity of 99.9% or higher; they also retain the desirable alignment properties of the nanotube forest.
Natural, incidental, and controlled flame environments
Fullerenes and carbon nanotubes are not necessarily products of high-tech laboratories; they are commonly formed in such mundane places as ordinary flames, produced by burning methane, ethylene, and benzene, and they have been found in soot from both indoor and outdoor air. However, these naturally occurring varieties can be highly irregular in size and quality because the environment in which they are produced is often highly uncontrolled. Thus, although they can be used in some applications, they can lack the high degree of uniformity necessary to satisfy the many needs of both research and industry. Recent efforts have focused on producing more uniform carbon nanotubes in controlled flame environments. Such methods have promise for large-scale, low-cost nanotube synthesis based on theoretical models, though they must compete with rapidly developing large-scale CVD production.
Removal of catalysts
Nanoscale metal catalysts are important ingredients for fixed- and fluidized-bed CVD synthesis of CNTs. They increase the growth efficiency of CNTs and may give control over their structure and chirality. During synthesis, catalysts can convert carbon precursors into tubular carbon structures but can also form encapsulating carbon overcoats. Together with metal oxide supports, they may therefore attach to or become incorporated into the CNT product. The presence of metal impurities can be problematic for many applications. Catalyst metals such as nickel, cobalt or yttrium, in particular, may be of toxicological concern. While un-encapsulated catalyst metals may be readily removable by acid washing, encapsulated ones require oxidative treatment to open their carbon shell. The effective removal of catalysts, especially of encapsulated ones, while preserving the CNT structure is a challenge and has been addressed in many studies. A new approach to break carbonaceous catalyst encapsulations is based on rapid thermal annealing.
Many electronic applications of carbon nanotubes crucially rely on techniques for selectively producing either semiconducting or metallic CNTs, preferably of a certain chirality. Several methods of separating semiconducting and metallic CNTs are known, but most of them are not yet suitable for large-scale technological processes. The most efficient method relies on density-gradient ultracentrifugation, which separates surfactant-wrapped nanotubes by the minute difference in their density. This density difference often translates into differences in nanotube diameter and (semi)conducting properties. Another method of separation uses a sequence of freezing, thawing, and compression of SWNTs embedded in agarose gel. This process results in a solution containing 70% metallic SWNTs and leaves a gel containing 95% semiconducting SWNTs. The diluted solutions separated by this method show various colors. SWNTs can also be separated by column chromatography; the yield is 95% for semiconducting SWNTs and 90% for metallic SWNTs.
In addition to separation of semiconducting and metallic SWNTs, it is possible to sort SWNTs by length, diameter, and chirality. The highest-resolution length sorting, with length variation of <10%, has thus far been achieved by size-exclusion chromatography (SEC) of DNA-dispersed carbon nanotubes (DNA-SWNT). SWNT diameter separation has been achieved by density-gradient ultracentrifugation (DGU) using surfactant-dispersed SWNTs and by ion-exchange chromatography (IEC) for DNA-SWNT. Purification of individual chiralities has also been demonstrated with IEC of DNA-SWNT: specific short DNA oligomers can be used to isolate individual SWNT chiralities. Thus far, 12 chiralities have been isolated at purities ranging from 70% for (8,3) and (9,5) SWNTs to 90% for (6,5), (7,5) and (10,5) SWNTs. There have been successful efforts to integrate these purified nanotubes into devices, e.g., FETs.
An alternative to separation is the development of selective growth of semiconducting or metallic CNTs. Recently, a new CVD recipe, involving a combination of ethanol and methanol gases and quartz substrates, was announced that results in horizontally aligned arrays of 95–98% semiconducting nanotubes.
Nanotubes are usually grown on nanoparticles of magnetic metal (Fe, Co), which facilitates production of electronic (spintronic) devices. In particular, control of current through a field-effect transistor by magnetic field has been demonstrated in such a single-tube nanostructure.
Current applications
Current use and application of nanotubes has mostly been limited to the use of bulk nanotubes, which is a mass of rather unorganized fragments of nanotubes. Bulk nanotube materials may never achieve a tensile strength similar to that of individual tubes, but such composites may, nevertheless, yield strengths sufficient for many applications. Bulk carbon nanotubes have already been used as composite fibers in polymers to improve the mechanical, thermal and electrical properties of the bulk product.
- Easton-Bell Sports, Inc. has been in partnership with Zyvex Performance Materials, using CNT technology in a number of their bicycle components—including flat and riser handlebars, cranks, forks, seatposts, stems and aero bars.
- Zyvex Technologies has also built a 54-foot maritime vessel, the Piranha Unmanned Surface Vessel, as a technology demonstrator for what is possible using CNT technology. CNTs help improve the structural performance of the vessel, resulting in a lightweight 8,000 lb boat that can carry a payload of 15,000 lb over a range of 2,500 miles.
- Amroy Europe Oy manufactures Hybtonite carbon nanoepoxy resins in which carbon nanotubes have been chemically activated to bond to epoxy, resulting in a composite material that is 20% to 30% stronger than other composite materials. It has been used for wind turbines, marine paints and a variety of sports gear such as skis, ice hockey sticks, baseball bats, hunting arrows, and surfboards.
Other current applications include:
- tips for atomic force microscope probes
- in tissue engineering, carbon nanotubes can act as scaffolding for bone growth
Potential applications
The strength and flexibility of carbon nanotubes make them of potential use in controlling other nanoscale structures, which suggests they will have an important role in nanotechnology engineering. The highest tensile strength measured for an individual multi-walled carbon nanotube is 63 GPa. Carbon nanotubes were found in Damascus steel from the 17th century, possibly helping to account for the legendary strength of the swords made of it.
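For a rough sense of scale, the sketch below converts that 63 GPa figure into the load a hypothetical 1 mm² cross-section of such material could support; this is an illustrative calculation, not a reported result.

```python
# Load carried by a hypothetical 1 mm^2 cross-section at 63 GPa tensile strength.
strength_pa = 63e9      # 63 GPa
area_m2 = 1e-6          # 1 mm^2 expressed in m^2
g = 9.81                # standard gravity, m/s^2

force_n = strength_pa * area_m2   # 63,000 N
mass_kg = force_n / g             # roughly 6,400 kg suspended

print(f"Force: {force_n:.0f} N  (~{mass_kg:.0f} kg suspended)")
```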
Because of the carbon nanotube's superior mechanical properties, many structures have been proposed ranging from everyday items like clothes and sports gear to combat jackets and space elevators. However, the space elevator will require further efforts in refining carbon nanotube technology, as the practical tensile strength of carbon nanotubes can still be greatly improved.
For perspective, outstanding breakthroughs have already been made. Pioneering work led by Ray H. Baughman at the NanoTech Institute has shown that single and multi-walled nanotubes can produce materials with toughness unmatched in the man-made and natural worlds.
Carbon nanotubes are also a promising material as building blocks in bio-mimetic hierarchical composite materials, given their exceptional mechanical properties (~1 TPa in modulus and ~100 GPa in strength). Initial attempts to incorporate CNTs into hierarchical structures led to mechanical properties that were significantly lower than these achievable limits. Windle et al. have used an in situ chemical vapor deposition (CVD) spinning method to produce continuous CNT yarns from CVD-grown CNT aerogels. With this technology, they fabricated CNT yarns with strengths as high as ~9 GPa at small gauge lengths of ~1 mm; however, defects resulted in a reduction of the specific strength to ~1 GPa at a 20 mm gauge length. Espinosa et al. developed high-performance DWNT-polymer composite yarns by twisting and stretching ribbons of randomly oriented bundles of DWNTs thinly coated with polymeric organic compounds. These DWNT-polymer yarns exhibited an unusually high energy to failure of ~100 J·g−1 (comparable to one of the toughest natural materials, spider silk) and strength as high as ~1.4 GPa. Efforts are ongoing to produce CNT composites that incorporate tougher matrix materials, such as Kevlar, to further improve the mechanical properties toward those of individual CNTs.
Because of the high mechanical strength of carbon nanotubes, research is being conducted into weaving them into clothes to create stab-proof and bulletproof clothing. The nanotubes would effectively stop a bullet from penetrating the body, although the bullet's kinetic energy would likely cause broken bones and internal bleeding.
Electrical circuits
Nanotube-based transistors, also known as carbon nanotube field-effect transistors (CNTFETs), have been made that operate at room temperature and that are capable of digital switching using a single electron. However, one major obstacle to the realization of nanotube electronics has been the lack of technology for mass production. In 2001, IBM researchers demonstrated how metallic nanotubes can be destroyed, leaving semiconducting ones behind for use as transistors. Their process is called "constructive destruction", which includes the automatic destruction of defective nanotubes on the wafer. This process, however, only gives control over the electrical properties on a statistical scale.
The potential of carbon nanotubes was demonstrated in 2003 when room-temperature ballistic transistors with ohmic metal contacts and a high-k gate dielectric were reported, showing 20–30× higher ON current than state-of-the-art Si MOSFETs. This presented an important advance in the field, as CNTs were shown to potentially outperform Si. At the time, a major challenge was ohmic metal contact formation. In this regard, palladium, a high-work-function metal, was shown to exhibit Schottky-barrier-free contacts to semiconducting nanotubes with diameters >1.7 nm.
The first nanotube integrated memory circuit was made in 2004. One of the main challenges has been regulating the conductivity of nanotubes. Depending on subtle surface features a nanotube may act as a plain conductor or as a semiconductor. A fully automated method has however been developed to remove non-semiconductor tubes.
Another way to make carbon nanotube transistors has been to use random networks of them. By doing so one averages all of their electrical differences and one can produce devices in large scale at the wafer level. This approach was first patented by Nanomix Inc. (date of original application June 2002). It was first published in the academic literature by the United States Naval Research Laboratory in 2003 through independent research work. This approach also enabled Nanomix to make the first transistor on a flexible and transparent substrate.
Large structures of carbon nanotubes can be used for thermal management of electronic circuits. An approximately 1 mm-thick carbon nanotube layer was used as a special material to fabricate coolers; this material has very low density and is about 20 times lighter than a comparable copper structure, while the cooling properties are similar for the two materials.
Overall, incorporating carbon nanotubes as transistors into logic-gate circuits with densities comparable to modern CMOS technology has not yet been demonstrated.
Electrical cables and wires
Wires for carrying electrical current may be fabricated from pure nanotubes and nanotube-polymer composites. Recently, small wires have been fabricated with specific conductivity exceeding that of copper and aluminum; these are the highest-conductivity carbon nanotube cables and also the highest-conductivity non-metal cables reported.
Paper batteries
A paper battery is a battery engineered to use a paper-thin sheet of cellulose (the major constituent of regular paper, among other things) infused with aligned carbon nanotubes. The nanotubes act as electrodes, allowing the storage device to conduct electricity. The battery, which functions as both a lithium-ion battery and a supercapacitor, can provide a long, steady power output comparable to a conventional battery, as well as a supercapacitor's quick burst of high power. And while a conventional battery contains a number of separate components, the paper battery integrates all of the battery components in a single structure, making it more energy efficient.
Solar cells
One of the promising applications of single-walled carbon nanotubes (SWNTs) is their use in solar panels, due to their strong UV/Vis-NIR absorption characteristics. Research has shown that they can provide a sizeable increase in efficiency, even in their current unoptimized state. Solar cells developed at the New Jersey Institute of Technology use a carbon nanotube complex, formed by a mixture of carbon nanotubes and carbon buckyballs (known as fullerenes), to form snake-like structures. Buckyballs trap electrons but cannot make them flow; when sunlight excites the polymers, the buckyballs grab the electrons, and the nanotubes, behaving like copper wires, then make the electrons, or current, flow.
Additional research has been conducted on creating SWNT hybrid solar panels to increase the efficiency further. These hybrids are created by combining SWNTs with photoexcitable electron donors to increase the number of electrons generated. It has been found that the interaction between the photoexcited porphyrin and the SWNT generates electron-hole pairs at the SWNT surfaces. This phenomenon has been observed experimentally and contributes practically to an increase in efficiency of up to 8.5%.
Hydrogen storage
In addition to being able to store electrical energy, there has been some research into using carbon nanotubes to store hydrogen to be used as a fuel source. By taking advantage of the capillary effects of the small carbon nanotubes, it is possible to condense gases to high density inside single-walled nanotubes. This allows gases, most notably hydrogen (H2), to be stored at high densities without being condensed into a liquid. Potentially, this storage method could be used on vehicles in place of gas fuel tanks for a hydrogen-powered car. A current issue regarding hydrogen-powered vehicles is the onboard storage of the fuel. Current storage methods involve cooling and condensing the H2 gas to a liquid state, which causes a loss of potential energy (25–45%) compared with the energy associated with the gaseous state. Storage using SWNTs would allow the H2 to be kept in its gaseous state, thereby increasing the storage efficiency. This method allows for a volume-to-energy ratio slightly smaller than that of current gasoline-powered vehicles, allowing for a slightly lower but comparable range.
An area of controversy and frequent experimentation regarding the storage of hydrogen by adsorption in carbon nanotubes is the efficiency by which this process occurs. The effectiveness of hydrogen storage is integral to its use as a primary fuel source, since hydrogen contains only about one fourth the energy per unit volume of gasoline.
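As a rough check of the "one fourth" figure, the snippet below uses commonly cited approximate volumetric energy densities; the specific numbers are assumptions for illustration, not values from the source.

```python
# Approximate volumetric energy densities (lower heating values, MJ/L).
gasoline_mj_per_l = 34.0     # commonly cited approximate value for gasoline
liquid_h2_mj_per_l = 8.5     # approximate value for liquid hydrogen

ratio = liquid_h2_mj_per_l / gasoline_mj_per_l
print(f"Liquid H2 / gasoline energy per volume: {ratio:.2f}  (~1/4)")
```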
Experimental capacity
One experiment sought to determine the amount of hydrogen stored in CNTs by utilizing elastic recoil detection analysis (ERDA). CNTs (primarily SWNTs) were synthesized via chemical vapor deposition (CVD) and subjected to a two-stage purification process including air oxidation and acid treatment, then formed into flat, uniform discs and exposed to pure, pressurized hydrogen at various temperatures. When the data were analyzed, it was found that the ability of CNTs to store hydrogen decreased as temperature increased. Moreover, the highest hydrogen concentration measured was ~0.18%, significantly lower than commercially viable hydrogen storage needs to be.
In another experiment, CNTs were synthesized via CVD and their structure was characterized using Raman spectroscopy. Utilizing microwave digestion, the samples were exposed to different acid concentrations and different temperatures for various amounts of time in an attempt to find the optimum purification method for SWNTs of the diameter determined earlier. The purified samples were then exposed to hydrogen gas at various high pressures, and their adsorption by weight percent was plotted. The data showed that hydrogen adsorption levels of up to 3.7% are possible with a very pure sample and under the proper conditions. It is thought that microwave digestion helps improve the hydrogen adsorption capacity of the CNTs by opening up the ends, allowing access to the inner cavities of the nanotubes.
Limitations on efficient hydrogen adsorption
The biggest obstacle to efficient hydrogen storage using CNTs is the purity of the nanotubes. To achieve maximum hydrogen adsorption, there must be minimal graphene, amorphous carbon, and metallic deposits in the nanotube sample. Current methods of CNT synthesis require a purification step. However, even with pure nanotubes, the adsorption capacity is only maximized under high pressures, which are undesirable in commercial fuel tanks.
Ultracapacitors
The MIT Laboratory for Electromagnetic and Electronic Systems uses nanotubes to improve ultracapacitors. The activated charcoal used in conventional ultracapacitors has many small hollow spaces of various sizes, which together create a large surface on which to store electric charge. But because charge is quantized into elementary charges, i.e. electrons, and each such elementary charge needs a minimum amount of space, a significant fraction of the electrode surface is not available for storage because the hollow spaces are incompatible with the charge's requirements. With a nanotube electrode, the spaces can be tailored to size, with few too large or too small, and consequently the capacity should be increased considerably.
Radar absorption
Radars operate in the microwave frequency range, which can be absorbed by MWNTs. Applying MWNTs to an aircraft would cause the radar signal to be absorbed, making the aircraft appear to have a smaller signature. One such application could be to paint the nanotubes onto the plane. Recently there has been some work at the University of Michigan regarding the usefulness of carbon nanotubes as stealth technology on aircraft. It has been found that, in addition to the radar-absorbing properties, the nanotubes neither reflect nor scatter visible light, making the aircraft essentially invisible at night, much like painting current stealth aircraft black, except much more effective. Current limitations in manufacturing, however, mean that production of nanotube-coated aircraft is not yet possible. One proposal to overcome these limitations is to cover small particles with the nanotubes and suspend the nanotube-covered particles in a medium such as paint, which can then be applied to a surface, like a stealth aircraft.
In the Kanzius cancer therapy, single-walled carbon nanotubes are inserted around cancerous cells, then excited with radio waves, which causes them to heat up and kill the surrounding cells.
Researchers at Rice University, Radboud University Nijmegen Medical Centre and University of California, Riverside have shown that carbon nanotubes and their polymer nanocomposites are suitable scaffold materials for bone cell proliferation and bone formation.
Previous studies on the use of CNTs for textile functionalization focused on fiber spinning to improve physical and mechanical properties. Recently, a great deal of attention has been focused on coating textile fabrics with CNTs. Various methods have been employed for modifying fabrics using CNTs. Shim et al. produced intelligent e-textiles for human biomonitoring using a polyelectrolyte-based coating with CNTs. Additionally, Panhuis et al. dyed textile material by immersion in either a poly(2-methoxyaniline-5-sulfonic acid) (PMAS) polymer solution or a PMAS-SWNT dispersion, obtaining enhanced conductivity and capacitance with durable behavior. In another study, Hu and coworkers coated textiles with single-walled carbon nanotubes using a simple "dipping and drying" process for wearable electronics and energy storage applications. CNTs have an aligned nanotube structure and a negative surface charge; their structures are therefore similar to those of direct dyes, so the exhaustion method is applied for coating and absorbing CNTs on the fiber surface to prepare multifunctional fabrics with antibacterial, electrically conductive, flame-retardant and electromagnetic-absorbance properties.
Optical power detectors
A spray-on mixture of carbon nanotubes and ceramic demonstrates an unprecedented ability to resist damage while absorbing laser light. Such coatings, which absorb the energy of high-powered lasers without breaking down, are essential for optical power detectors that measure the output of such lasers. These are used, for example, in military equipment for defusing unexploded mines. The composite consists of multiwall carbon nanotubes and a ceramic made of silicon, carbon and nitrogen. Including boron boosts the breakdown temperature. The nanotubes and graphene-like carbon transmit heat well, while the oxidation-resistant ceramic boosts damage resistance. Creating the coating involves dispersing the nanotubes in toluene, to which a clear liquid polymer containing boron is added. The mixture is heated to 1,100 °C (2,010 °F), and the result is crushed into a fine powder, dispersed again in toluene and sprayed in a thin coat onto a copper surface. The coating absorbed 97.5 percent of the light from a far-infrared laser and tolerated 15 kilowatts per square centimeter for 10 seconds. Damage tolerance is about 50 percent higher than for similar coatings, e.g., nanotubes alone and carbon paint.
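An illustrative bit of arithmetic on the figures just quoted (derived here, not reported in the source): the total energy absorbed per unit area during that exposure.

```python
# Energy per unit area during the reported exposure:
# 15 kW/cm^2 for 10 s with 97.5% absorption.
irradiance_w_per_cm2 = 15e3
exposure_s = 10.0
absorptance = 0.975

incident_j_per_cm2 = irradiance_w_per_cm2 * exposure_s      # 150 kJ/cm^2 incident
absorbed_j_per_cm2 = incident_j_per_cm2 * absorptance        # ~146 kJ/cm^2 absorbed

print(f"Incident fluence: {incident_j_per_cm2 / 1e3:.0f} kJ/cm^2")
print(f"Absorbed energy:  {absorbed_j_per_cm2 / 1e3:.0f} kJ/cm^2")
```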
Other applications
Carbon nanotubes have been implemented in nanoelectromechanical systems, including mechanical memory elements (NRAM being developed by Nantero Inc.) and nanoscale electric motors (see Nanomotor or Nanotube nanomotor).
In May 2005, Nanomix Inc. placed on the market a hydrogen sensor that integrated carbon nanotubes on a silicon platform. Since then, Nanomix has been patenting many such sensor applications, such as in the fields of carbon dioxide, nitrous oxide, glucose, and DNA detection.
Eikos Inc of Franklin, Massachusetts and Unidym Inc. of Silicon Valley, California are developing transparent, electrically conductive films of carbon nanotubes to replace indium tin oxide (ITO). Carbon nanotube films are substantially more mechanically robust than ITO films, making them ideal for high-reliability touchscreens and flexible displays. Printable water-based inks of carbon nanotubes are desired to enable the production of these films to replace ITO. Nanotube films show promise for use in displays for computers, cell phones, PDAs, and ATMs.
A nanoradio, a radio receiver consisting of a single nanotube, was demonstrated in 2007. In 2008 it was shown that a sheet of nanotubes can operate as a loudspeaker if an alternating current is applied. The sound is not produced through vibration but thermoacoustically.
A flywheel made of carbon nanotubes could be spun at extremely high velocity on a floating magnetic axis in a vacuum, and potentially store energy at a density approaching that of conventional fossil fuels. Since energy can be added to and removed from flywheels very efficiently in the form of electricity, this might offer a way of storing electricity, making the electrical grid more efficient and variable power suppliers (like wind turbines) more useful in meeting energy needs. The practicality of this depends heavily upon the cost of making massive, unbroken nanotube structures, and their failure rate under stress.
Carbon nanotube springs have the potential to indefinitely store elastic potential energy at ten times the density of lithium-ion batteries with flexible charge and discharge rates and extremely high cycling durability.
Ultra-short SWNTs (US-tubes) have been used as nanoscaled capsules for delivering MRI contrast agents in vivo.
Carbon nanotubes provide a certain potential for metal-free catalysis of inorganic and organic reactions. For instance, oxygen groups attached to the surface of carbon nanotubes have the potential to catalyze oxidative dehydrogenations or selective oxidations. Nitrogen-doped carbon nanotubes may replace platinum catalysts used to reduce oxygen in fuel cells. A forest of vertically aligned nanotubes can reduce oxygen in alkaline solution more effectively than platinum, which has been used in such applications since the 1960s. Here, the nanotubes have the added benefit of not being subject to carbon monoxide poisoning.
History
A 2006 editorial written by Marc Monthioux and Vladimir Kuznetsov in the journal Carbon described the interesting and often-misstated origin of the carbon nanotube. A large percentage of academic and popular literature attributes the discovery of hollow, nanometer-size tubes composed of graphitic carbon to Sumio Iijima of NEC in 1991.
In 1952 L. V. Radushkevich and V. M. Lukyanovich published clear images of 50 nanometer diameter tubes made of carbon in the Soviet Journal of Physical Chemistry. This discovery was largely unnoticed, as the article was published in the Russian language, and Western scientists' access to Soviet press was limited during the Cold War. It is likely that carbon nanotubes were produced before this date, but the invention of the transmission electron microscope (TEM) allowed direct visualization of these structures.
Carbon nanotubes have been produced and observed under a variety of conditions prior to 1991. A paper by Oberlin, Endo, and Koyama published in 1976 clearly showed hollow carbon fibers with nanometer-scale diameters using a vapor-growth technique. Additionally, the authors show a TEM image of a nanotube consisting of a single wall of graphene. Later, Endo has referred to this image as a single-walled nanotube.
In 1979, John Abrahamson presented evidence of carbon nanotubes at the 14th Biennial Conference of Carbon at Pennsylvania State University. The conference paper described carbon nanotubes as carbon fibers that were produced on carbon anodes during arc discharge. A characterization of these fibers was given as well as hypotheses for their growth in a nitrogen atmosphere at low pressures.
In 1981, a group of Soviet scientists published the results of chemical and structural characterization of carbon nanoparticles produced by thermocatalytic disproportionation of carbon monoxide. Using TEM images and XRD patterns, the authors suggested that their "carbon multi-layer tubular crystals" were formed by rolling graphene layers into cylinders. They speculated that, by rolling graphene layers into a cylinder, many different arrangements of the graphene hexagonal net are possible. They suggested two such possible arrangements: a circular arrangement (armchair nanotube) and a spiral, helical arrangement (chiral tube).
In 1987, Howard G. Tennent of Hyperion Catalysis was issued a U.S. patent for the production of "cylindrical discrete carbon fibrils" with a "constant diameter between about 3.5 and about 70 nanometers..., length 10² times the diameter, and an outer region of multiple essentially continuous layers of ordered carbon atoms and a distinct inner core...."
Iijima's discovery of multi-walled carbon nanotubes in the insoluble material of arc-burned graphite rods in 1991 and Mintmire, Dunlap, and White's independent prediction that if single-walled carbon nanotubes could be made, then they would exhibit remarkable conducting properties helped create the initial buzz that is now associated with carbon nanotubes. Nanotube research accelerated greatly following the independent discoveries by Bethune at IBM and Iijima at NEC of single-walled carbon nanotubes and methods to specifically produce them by adding transition-metal catalysts to the carbon in an arc discharge. The arc discharge technique was well-known to produce the famed Buckminster fullerene on a preparative scale, and these results appeared to extend the run of accidental discoveries relating to fullerenes. The original observation of fullerenes in mass spectrometry was not anticipated, and the first mass-production technique by Krätschmer and Huffman was used for several years before realizing that it produced fullerenes.
The discovery of nanotubes remains a contentious issue. Many believe that Iijima's report in 1991 is of particular importance because it brought carbon nanotubes into the awareness of the scientific community as a whole.
See also
- Boron nitride nanotube
- Carbide-derived carbon
- Carbon nanocone
- Carbon nanofibers
- Carbon nanoparticles
- Carbon nanotube chemistry
- Colossal carbon tube
- Graphene oxide paper
- List of software for nanostructures modeling
- Molecular modelling
- Ninithi (nanotube modelling software)
- Organic electronics
- Selective chemistry of single-walled nanotubes
- Silicon nanotubes
- Wang, X.; Li, Qunqing; Xie, Jing; Jin, Zhong; Wang, Jinyong; Li, Yan; Jiang, Kaili; Fan, Shoushan (2009). "Fabrication of Ultralong and Electrically Uniform Single-Walled Carbon Nanotubes on Clean Substrates". Nano Letters 9 (9): 3137–3141. doi:10.1021/nl901260b. PMID 19650638.
- Gullapalli, S.; Wong, M.S. (2011). "Nanotechnology: A Guide to Nano-Objects". Chemical Engineering Progress 107 (5): 28–32.
- Mintmire, J.W.; Dunlap, B.I.; White, C.T. (1992). "Are Fullerene Tubules Metallic?". Phys. Rev. Lett. 68 (5): 631–634. doi:10.1103/PhysRevLett.68.631. PMID 10045950.
- Dekker, C. (1999). "Carbon nanotubes as molecular quantum wires". Physics Today 52 (5): 22–28. doi:10.1063/1.882658.
- Martel, R.; Derycke, V.; Lavoie, C.; Appenzeller, J.; Chan, K.; Tersoff, J.; Avouris, Ph. (2001). "Ambipolar Electrical Transport in Semiconducting Single-Wall Carbon Nanotubes". Phys. Rev. Lett. 87 (25): 256805. doi:10.1103/PhysRevLett.87.256805. PMID 11736597.
- Flahaut, E.; Bacsa, Revathi; Peigney, Alain; Laurent, Christophe (2003). "Gram-Scale CCVD Synthesis of Double-Walled Carbon Nanotubes". Chemical Communications 12 (12): 1442–1443. doi:10.1039/b301514a. PMID 12841282.
- Cumings, J.; Zettl, A. (2000). "Low-Friction Nanoscale Linear Bearing Realized from Multiwall Carbon Nanotubes". Science 289 (5479): 602–604. doi:10.1126/science.289.5479.602. PMID 10915618.
- Treacy, M.M.J.; Ebbesen, T.W.; Gibson, J.M. (1996). "Exceptionally high Young's modulus observed for individual carbon nanotubes". Nature 381 (6584): 678–680. doi:10.1038/381678a0.
- Zavalniuk, V.; Marchenko, S. (2011). "Theoretical analysis of telescopic oscillations in multi-walled carbon nanotubes". Low Temperature Physics 37 (4): 337. arXiv:0903.2461. doi:10.1063/1.3592692.
- Liu, L.; Guo, G.; Jayanthi, C.; Wu, S. (2002). "Colossal Paramagnetic Moments in Metallic Carbon Nanotori". Phys. Rev. Lett. 88 (21): 217206. doi:10.1103/PhysRevLett.88.217206. PMID 12059501.
- Huhtala, M.; Kuronen, A.; Kaski, K. (2002). "Carbon nanotube structures: Molecular dynamics simulation at realistic limit". Computer Physics Communications 146: 30. doi:10.1016/S0010-4655(02)00432-0.
- Yu, Kehan; Ganhua Lu, Zheng Bo, Shun Mao, and Junhong Chen (2011). "Carbon Nanotube with Chemically Bonded Graphene Leaves for Electronic and Optoelectronic Applications". J. Phys. Chem. Lett. 13 2 (13): 1556–1562. doi:10.1021/jz200641c.
- Stoner, Brian R.; Akshay S. Raut, Billyde Brown, Charles B. Parker, and Jeffrey T. Glass (2011). "Graphenated carbon nanotubes for enhanced electrochemical double layer capacitor performance". Appl. Phys. Lett. 18 99 (18): 183104. doi:10.1063/1.3657514.
- Hsu, Hsin-Cheng; Chen-Hao Wang, S.K. Nataraj, Hsin-Chih Huang, He-Yun Du, Sun-Tang Chang, Li-Chyong Chen, Kuei-Hsien Chen (2012). "Stand-up structure of graphene-like carbon nanowalls on CNT directly grown on polyacrylonitrile-based carbon fiber paper as supercapacitor". Diamond and Related Materials 25: 176–9. doi:10.1016/j.diamond.2012.02.020.
- Parker, Charles B.; Akshay S. Raut, Billyde Brown, Brian R. Stoner, and Jeffrey T. Glass (2012). "Three-dimensional arrays of graphenated carbon nanotubes". J. Mater. Res. 7 27 (7): 1046–53. doi:10.1557/jmr.2012.43.
- Cui, Hong-tao; O. Zhou, and B. R. Stoner (2000). "Deposition of aligned bamboo-like carbon nanotubes via microwave plasma enhanced chemical vapor deposition". J. Appl. Phys. 88 (10): 6072–4. doi:10.1063/1.1320024.
- Stoner, Brian R.; Jeffrey T. Glass (2012). "Carbon nanostructures: a morphological classification for charge density optimization". Diamond and Related Materials 23: 130–4. doi:10.1016/j.diamond.2012.01.034.
- Kouvetakis, J.; Todd, M.; Wilkens, B.; Bandari, A.; Cave, N. (1994). "Novel synthetic routes to carbon–nitrogen thin films". Chem. Mater. 6: 811–814.
- Yin, L.-W.; Bando, Y.; Li, M.-S.; Liu, Y.-X.; Qi, Y.-X. (2003). "Unique single-crystalline beta carbon nitride nanorods". Adv. Mater. 15: 1840–1844.
- Oku, T.; Kawaguchi, M. (2000). "Microstructure analysis of CN-based nanocage materials by high-resolution electron microscopy". Diamond Relat. Mater. 9: 906–910.
- Guo, Q.X.; Xie, Y.; Wang, X.J.; Zhang, S.Y.; Hou, T.; Lv, S.C. (2004). "Synthesis of carbon nitride nanotubes with the C3N4 stoichiometry via a benzene-thermal process at low temperatures". Chem. Commun. 1: 26–27.
- Zhong, Y. et al. (2010). Journal of Physics and Chemistry of Solids 71: 134–139.
- Shin, Weon Ho; Jeong, Hyung Mo; Kim, Byung Gon; Kang, Jeung Ku; Choi, Jang Wook (2012). "Nitrogen-Doped Multiwall Carbon Nanotubes for Lithium Storage with Extremely High Capacity". Nano Letters 12 (5): 2283–2288.
- "Doped nanotubes boost lithium battery power three-fold."
- Smith, Brian W.; Monthioux, Marc; Luzzi, David E. (1998). "Encapsulated C-60 in carbon nanotubes". Nature 396: 323–324. doi:10.1038/24521.
- Smith, B.W.; Luzzi, D.E. (2000). "Formation mechanism of fullerene peapods and coaxial tubes: a path to large scale synthesis". Chem. Phys. Lett. 321: 169–174. doi:10.1016/S0009-2614(00)00307-9.
- Su, H.; Goddard, W.A.; Zhao, Y. (2006). "Dynamic friction force in a carbon peapod oscillator". Nanotechnology 17 (22): 5691–5695. arXiv:cond-mat/0611671. doi:10.1088/0957-4484/17/22/026.
- Wang, M.; Li, C.M. (2010). "An oscillator in a carbon peapod controllable by an external electric field: A molecular dynamics study". Nanotechnology 21 (3): 035704. doi:10.1088/0957-4484/21/3/035704.
- Liu, Q.; Ren, Wencai; Chen, Zhi-Gang; Yin, Lichang; Li, Feng; Cong, Hongtao; Cheng, Hui-Ming (2009). "Semiconducting properties of cup-stacked carbon nanotubes". Carbon 47 (3): 731–736. doi:10.1016/j.carbon.2008.11.005.
- "A Better Way to Make Nanotubes". Lawrence Berkeley National Laboratory. January 5, 2009.
- Bertozzi, C. (2009). "Carbon Nanohoops: Shortest Segment of a Carbon Nanotube Synthesized". Lawrence Berkeley National Laboratory.
- "Spotlight on Nagoya:A centre of chemistry excellence". Nature. October 7, 2009.
- Zhao, X.; Liu, Y.; Inoue, S.; Suzuki, T.; Jones, R.; Ando, Y. (2004). "Smallest Carbon Nanotube is 3 Å in Diameter". Phys. Rev. Lett. 92 (12): 125502. doi:10.1103/PhysRevLett.92.125502. PMID 15089683.
- Hayashi, Takuya; Kim, Yoong Ahm; Matoba, Toshiharu; Esaka, Masaya; Nishimura, Kunio; Tsukada, Takayuki; Endo, Morinobu; Dresselhaus, Mildred S. (2003). "Smallest Freestanding Single-Walled Carbon Nanotube". Nano Letters 3 (7): 887–889. doi:10.1021/nl034080r.
- Guan, L.; Suenaga, K.; Iijima, S. (2008). "Smallest Carbon Nanotube Assigned with Atomic Resolution Accuracy". Nano Letters 8 (2): 459–462. doi:10.1021/nl072396j. PMID 18186659.
- Yu, M.-F.; Lourie, O; Dyer, MJ; Moloni, K; Kelly, TF; Ruoff, RS (2000). "Strength and Breaking Mechanism of Multiwalled Carbon Nanotubes Under Tensile Load". Science 287 (5453): 637–640. doi:10.1126/science.287.5453.637. PMID 10649994.
- Peng, B.; Locascio, Mark; Zapol, Peter; Li, Shuyou; Mielke, Steven L.; Schatz, George C.; Espinosa, Horacio D. (2008). "Measurements of near-ultimate strength for multiwalled carbon nanotubes and irradiation-induced crosslinking improvements". Nature Nanotechnology 3 (10): 626–631. doi:10.1038/nnano.2008.211.
- Collins, P.G. (2000). "Nanotubes for Electronics". Scientific American: 67–69.
- Filleter, T.; Bernal, R.; Li, S.; Espinosa, H.D. (2011). "Ultrahigh Strength and Stiffness in Cross-Linked Hierarchical Carbon Nanotube Bundles". Advanced Materials 23 (25): 2855. doi:10.1002/adma.201100547.
Algorithms and Arithmetic in Everyday Mathematics
Everyday Mathematics Parent Handbook
An algorithm is a set of rules for solving a math problem which, if done properly, will give a correct answer each time.
Algorithms generally involve repeating a series of steps over and over, as in the borrowing and carrying algorithms and in the long multiplication and division algorithms. The Everyday Mathematics program includes a variety of suggested algorithms for addition, subtraction, multiplication and division. Current research indicates a number of good reasons for this, primarily that students learn more about numbers, operations, and place value when they explore math using different methods.
Arithmetic computations are generally performed in one of three ways: (1) mentally, (2) with paper and pencil, or (3) with a machine, e.g. calculator or abacus. The method chosen depends on the purpose of the calculation. If we need rapid, precise calculations, we would choose a machine. If we need a quick, ballpark estimate or if the numbers are easy, we would do a mental computation.
The learning of the algorithms of arithmetic has been, until recently, the core of mathematics programs in elementary schools. There were good reasons for this. It was necessary that students have reliable, accurate methods to do arithmetic by hand, for everyday life, business, and to support further study in mathematics and science. Today's society demands more from its citizens than knowledge of basic arithmetic skills. Our students are confronted with a world in which mathematical proficiency is essential for success. There is general agreement among mathematics educators that drill on paper-and-pencil algorithms should receive less emphasis, and that more emphasis be placed on areas like geometry, measurement, data analysis, probability and problem solving, and that students be introduced to these subjects using realistic problem contexts. The use of technology, including calculators, does not diminish the need for basic knowledge, but does provide children with opportunities to explore and expand their problem-solving capabilities beyond what their pencil-and-paper arithmetic skills may allow.
Sample Algorithms: Below are examples of a few procedures that have come from children's mental arithmetic efforts. Each is a legitimate algorithm, that is, a set of rules that, if properly followed, yields a correct result. As parents, you need to be accepting and encouraging when your children attempt these computational procedures. As they experiment and share their solution strategies, please allow their ideas to flourish. If you are not comfortable with the vocabulary of arithmetic, you may want to review the glossary entries for addition, subtraction, multiplication and division before reading the sample algorithms.
Addition Algorithms
1. Left-to-right Algorithm
2. Partial-Sums Algorithm
3. Rename-Addends Algorithm (Opposite Change)
If a number is added to one of the addends and the same number is subtracted from the other addend, the result remains the same. The purpose is to rename the addends so that one of the addends ends in zeros.
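To picture the opposite-change idea in one more way, here is a small, purely illustrative sketch in Python; the function name and the example numbers (57 + 38) are our own and are not taken from the Everyday Mathematics materials.

```python
def opposite_change_add(a, b):
    # Move just enough from one addend to the other so that b ends in zero.
    # Adding an amount to b and subtracting the same amount from a
    # leaves the sum unchanged.
    shift = (10 - b % 10) % 10
    return (a - shift) + (b + shift)

# Example: 57 + 38 is renamed as 55 + 40, which is easy to add mentally.
print(opposite_change_add(57, 38))  # 95
```

Renaming 57 + 38 as 55 + 40 is exactly the kind of move a child doing mental arithmetic might make.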
4. Counting-On Algorithm
Subtraction Algorithms
1. Add-Up Algorithm
2. Left-to-Right Algorithm
3. Rename Subtrahend Algorithm (also called Same Change)
If the same number is added to or subtracted from both the minuend (top number) and subtrahend (bottom number), the result remains the same. The purpose is to rename both the minuend and the subtrahend so that the subtrahend ends in zero.
This type of solution method shows a strong ability to hold and manipulate numbers mentally.
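Again as an illustration only (the function and the example 83 - 27 are ours, not the handbook's), the same-change idea might be sketched like this:

```python
def same_change_subtract(minuend, subtrahend):
    # Add the same amount to both numbers so the subtrahend ends in zero;
    # the difference does not change because both numbers move together.
    shift = (10 - subtrahend % 10) % 10
    return (minuend + shift) - (subtrahend + shift)

# Example: 83 - 27 is renamed as 86 - 30.
print(same_change_subtract(83, 27))  # 56
```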
4. Two Unusual Algorithms
Multiplication Algorithms
In Third Grade Everyday Mathematics, a partial-products algorithm is the initial approach to solving multiplication problems with formal paper-and-pencil procedures. This algorithm is done from left to right, so that the largest partial product is calculated first. As with left-to-right algorithms for addition, this encourages quick estimates of the magnitude of products without necessarily finishing the procedure to find exact answers. To use this algorithm efficiently, students need to be very good at multiplying multiples of 10, 100, and 1000. The fourth-grade program contains a good deal of practice and review of these skills, which also serve very well in making ballpark estimates in problems that involve multiplication or division, and introduces the * as a symbol of multiplication.
1. Partial-Product Algorithm
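As a purely illustrative sketch of the partial-products idea (the code and the example 67 * 53 are ours rather than the program's), each factor is split into place-value parts and every pair of parts is multiplied, with the largest product found first:

```python
def partial_products(a, b):
    # Split a whole number into place-value parts, e.g. 67 -> [60, 7].
    def parts(n):
        return [int(d) * 10 ** i
                for i, d in enumerate(reversed(str(n))) if d != "0"]

    parts_a = sorted(parts(a), reverse=True)
    parts_b = sorted(parts(b), reverse=True)
    partials = [x * y for x in parts_a for y in parts_b]
    return partials, sum(partials)

# Example: 67 * 53 -> 60*50, 60*3, 7*50, 7*3 -> 3000 + 180 + 350 + 21 = 3551
print(partial_products(67, 53))  # ([3000, 180, 350, 21], 3551)
```

Because 60 * 50 = 3000 is found first, a student can tell right away that the answer will be a little more than 3000.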
2. Modified Standard U.S. Algorithms
A Division Algorithm
The key question to be answered in many problems is, "How many of these are in that?" or "How many n's are in m?" This can be expressed as division: m divided by n, or m/n.
One way to solve division problems is to use an algorithm that begins with a series of at least/less than estimates of how many n's are in m. You check each estimate. If you have not taken out enough n's from m, take out some more; when you have taken out all there are, add the interim estimates.
Consider 158 divided by 12: how many 12s are in 158? There are at least ten, because ten 12s are 120, but fewer than twenty, because twenty 12s are 240. You would record 10 as your first estimate and remove (subtract) ten 12s from 158, leaving 38. The next question is, "How many 12s are in the remaining 38?" You might know the answer right away (since three 12s are 36), or you might sneak up on it: More than 1, more than 2, a little more than 3, but not as many as 4. Taking out three 12s leaves 2, which is less than 12, so you can stop estimating.
To obtain the final result, you would add all of your estimates (10 + 3 = 13) and note what, if anything, is left over (2). There is a total of thirteen 12s in 158; 2 is left over. The quotient is 13, and the remainder is 2.
The examples show one method of recording the steps in the algorithm.
One advantage of this algorithm is that students can use numbers that are easy for them to work with. Students who are good estimators and confident of their extended multiplication facts will need to make only a few estimates to arrive at a quotient, while others will be more comfortable taking smaller steps. More important than the course a student follows is that the student understands how and why this algorithm works and can use it to get an accurate answer.
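A rough sketch of this estimate-and-subtract process in Python, using the 158 divided by 12 example from above; the particular choice of estimates here (tens while they fit, then ones) is only one of many a student might make.

```python
def partial_quotients(m, n):
    # How many n's are in m?  Take out an easy number of n's at a time
    # (here: ten at a time while that still fits, then one at a time),
    # record each estimate, and add the estimates at the end.
    estimates = []
    remaining = m
    chunk = 10
    while remaining >= n:
        if chunk * n <= remaining:
            estimates.append(chunk)
            remaining -= chunk * n
        else:
            chunk = 1  # the bigger chunk no longer fits; switch to single n's
    return estimates, sum(estimates), remaining

# Example from the text: how many 12s are in 158?
print(partial_quotients(158, 12))  # ([10, 1, 1, 1], 13, 2): quotient 13, remainder 2
```

A student who is comfortable with larger multiples might take out the thirteen 12s in only two estimates (ten and then three); the quotient and remainder come out the same.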
If you wanted to express the leftover 2 as part of the quotient rather than as a remainder, you could ask how many 12s are in 2. A student with good number sense might answer, "At least one-tenth, since 0.1 * 12 is 1.2, but less than two-tenths, since 0.2 * 12 is 2.4." The answer then could be 13.1 (12s) in 158, and a little bit left over.
The question behind this algorithm, "How many of these are in that?" also serves well for estimates where the information is given in scientific notation (see glossary). The uses of this algorithm with problems that involve scientific notation or decimal information will be explored briefly in grades 5 and 6, mainly to build number sense and understanding of the meanings of division.
An algorithm is any series of steps which, if followed properly, always yield a correct result. There are many ways to add, subtract, divide, and multiply that meet this definition. Your child will learn to compute accurately and quickly.
Children gain valuable confidence and insight when permitted to explore algorithms of their own invention. A given child may be more comfortable with this way or that. A given approach may be more useful for this problem or that one.
Although you probably learned only one or two algorithms for each kind of arithmetic, it is important that you support your child's use of many. In fact, if you closely observe your own computations in a variety of real-life settings (counting change, making estimates, balancing your checkbook, etc.), you will probably find that you use different algorithms at different times, and some of them are probably your own inventions.
I have discussed the importance of understanding logical form and how to create formal counterexamples. Understanding logic well is a lot easier when we know something about logical validity, and one way to understand logical validity better is to consider a proof that a particular argument is valid. If we know why an argument is valid, then we know more about logical validity in general. I will now present such a proof here. It can take some time to understand, so you might want to read it carefully.
Consider the following argument:
- If dogs are mammals, then they’re animals.
- Dogs are mammals.
- Therefore, dogs are animals.
This argument is logically valid. If the premises are true, then the conclusion must be true because it has a valid argument form. The argument form is the following:
- If A, then B.
- A.
- Therefore, B.
We can know that this argument is valid simply by knowing what “If A, then B” means. It means “If A is true, then B is true” or “B is true whenever A is true.” Since A is true, B also has to be true because that’s what “If A, then B” means.
Nonetheless, we could construct a proof that this argument form is valid. The proof works by contradiction, and it can be summarized in the following steps:
- We can assume the argument form is invalid. In that case the premises can be true and the conclusion can be false at the same time. Let’s assume the premises are true and the conclusion is false.
- If this assumption is impossible because it implies a contradiction, then we know the argument can’t be invalid (and must be valid).
- In that case “If A, then B” is true because it’s a premise, A is true because it’s a premise, and B is false because it’s the conclusion.
- In that case A must also be false because “if A (is true), then B (is true)” is assumed to be true, and B is assumed to be false. B is true whenever A is true, but B is false, so A must be false. (Consider the statement, “If dogs are mammals, then dogs are animals.” If we find out that dogs aren’t animals, then they can’t be mammals either. When a conditional statement is true and its second part is false, its first part must be false as well.)
- Therefore, A is true and false.
- Therefore, the assumption that the argument form is invalid leads to a contradiction and must be false.
- Therefore, the argument must be valid.
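As an illustrative cross-check of the same point (this short script is ours, not part of the original post), we can list every possible combination of truth values for A and B and confirm that no combination makes both premises true while the conclusion is false:

```python
from itertools import product

def implies(a, b):
    # "If a, then b" is false only when a is true and b is false.
    return (not a) or b

counterexamples = []
for A, B in product([True, False], repeat=2):
    premises_true = implies(A, B) and A  # premise 1: If A, then B; premise 2: A
    conclusion_false = not B             # conclusion: B
    if premises_true and conclusion_false:
        counterexamples.append((A, B))

# An empty list means no assignment of truth values makes the premises true
# and the conclusion false, which is what it means for the form to be valid.
print(counterexamples)  # []
```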
We generally demand that our arguments be logically valid, and we have an intuitive grasp of what it means for an argument to be logically valid. Validity isn't sufficient for a good argument, but it's generally a very important element in constructing good arguments. Knowing more about logical validity beyond the intuitive level can help us achieve clarity and improve our thinking. Knowing why an argument is valid can help us achieve these goals.
Update (6/21/2011): I updated my proof that the argument form is valid because the other proof I gave was circular and therefore unpersuasive.
CHAPTER 2: FREEDOM OF SPEECH AND ARTICLE 9 OF THE BILL OF RIGHTS
Article 9 of the Bill of Rights 1689
36. A primary function of Parliament is to debate and pass resolutions freely on subjects of its own choosing. This is a cornerstone of parliamentary democracy. The performance of this function is secured by the members of each House having the right to say what they will (freedom of speech) and discuss what they will (freedom of debate). These freedoms, the single most important parliamentary privilege, are enshrined in article 9 of the Bill of Rights 1689. Using modern spelling, article 9 provided:
`That the freedom of speech and debates or proceedings in Parliament ought not to be impeached or questioned in any court or place out of Parliament.'
In this article the meaning of `impeach' is not clear: possible meanings include hinder, challenge and censure.
37. Over the years this article has been the subject of many legal decisions. Even so, uncertainty remains on two basic points: what is covered by `proceedings in Parliament', and what is meant by `impeached or questioned in any . . . place out of Parliament'. A definitive history of the origins of article 9 has yet to be written, but one thing is reasonably clear: the principal purpose was to affirm the House's right to initiate business of its own and to protect members from being brought before the courts by the Crown and accused of seditious libel. Article 9 also reasserted the long established claim not to be answerable before any court for words spoken in Parliament. The modern interpretation is now well established: that article 9 and the constitutional principle it encapsulates protect members of both Houses from being subjected to any penalty, civil or criminal, in any court or tribunal for what they have said in the course of proceedings in Parliament.
38. This immunity is wide. Statements made in Parliament may not even be used to support a cause of action arising out of Parliament, as where a plaintiff suing a member for an alleged libel on television was not permitted to rely on statements made by the member in the House of Commons as proof of malice. The immunity is also absolute: it is not excluded by the presence of malice or fraudulent purpose. Article 9 protects the member who knows what he is saying is untrue as much as the member who acts honestly and responsibly. Nor is the protection confined to members. Article 9 applies to officers of Parliament and non-members who participate in proceedings in Parliament, such as witnesses giving evidence to a committee of one of the Houses. In more precise legal language, it protects a person from legal liability for words spoken or things done in the course of, or for the purposes of or incidental to, any proceedings in Parliament.
39. A comparable principle exists in court proceedings. Statements made by a judge or advocate or witness in the course of court proceedings enjoy absolute privilege at common law against claims for defamation. The rationale in the two cases is the same. The public interest in the freedom of speech in the proceedings, whether parliamentary or judicial, is of a high order. It is not to be imperilled by the prospect of subsequent inquiry into the state of mind of those who participate in the proceedings even though the price is that a person may be defamed unjustly and left without a remedy.
40. It follows we do not agree with those who have suggested that members of Parliament do not need any greater protection against civil actions than the qualified privilege enjoyed by members of elected bodies in local government. Unlike members of Parliament, local councillors are liable in defamation if they speak maliciously. We consider it a matter of the utmost importance that there should be a national public forum where all manner of persons, irrespective of their power or wealth, can be criticised. Members should not be exposed to the risk of being brought before the courts to defend what they said in Parliament. Abuse of parliamentary freedom of speech is a matter for internal self-regulation by Parliament, not a matter for investigation and regulation by the courts. The legal immunity principle is as important today as ever. The courts have a duty not to erode this essential constitutional principle.
41. Thus far there is no difficulty or uncertainty. Nor is there any difficulty in the official report of parliamentary debates (Hansard) being used in court to establish what was said and done in Parliament as a matter of history: for example, that a particular member made a speech as reported on a particular day. But this leaves open the question whether the article 9 prohibition on the `questioning' of parliamentary proceedings in court does, or should, extend more widely than to afford legal immunity. Should article 9 preclude a court from relying on a statement made in Parliament even when this would not involve impugning the motives or reliability of the member who made the statement and would not result in the member being exposed to any civil or criminal liability? Another issue concerns the interaction of article 9 and court proceedings for the judicial review of ministerial decisions. These issues, touching and concerning parliamentary freedom of speech, are of basic importance. They are also complex, and examining them calls for close analysis.
`Ought not to be questioned': recent developments
42. As a prelude, a practical point should be noted. The use of reports of debates in court proceedings was facilitated by the removal of a formal obstacle comparatively recently. From at least 1818 the practice in the House of Commons was that its debates and proceedings could not be referred to in court proceedings without the leave of the House. Petitions for leave were rarely refused, and in order to save parliamentary time the House decided in 1981 to discontinue the need for such leave. When doing so the House expressly re-affirmed the status of proceedings in Parliament confirmed by article 9 of the Bill of Rights. The practice of requiring leave to refer to proceedings was never followed in the House of Lords. One effect of the 1981 change has been that the use of Hansard in court proceedings has increased. The oft quoted statement of Blackstone in his celebrated eighteenth century Commentaries that `whatever matter arises concerning either House of Parliament, ought to be examined, discussed, and adjudged in that House to which it relates, and not elsewhere' is now accepted as being too wide and sweeping.
Pepper v Hart
43. One of the uses the courts now make of parliamentary proceedings is as an aid when interpreting Acts of Parliament. This follows from the decision in Pepper v Hart. The case concerned the proper meaning of a taxation provision. Mr Hart was a schoolmaster at a fee-paying school which operated a concessionary fee scheme enabling members of staff to have their sons educated at the school at reduced fees if surplus places were available. Tax was payable by Mr Hart on `the cash equivalent of the benefit', but the statutory definition of that expression was ambiguous. During the committee stage of the Finance Bill in the House of Commons the financial secretary to the Treasury indicated that the basis of taxation for certain benefits in kind would remain the cost to the employer of providing the service. When pressed he interpreted this as being, in effect, the extra cost caused by the provision of the benefit in question. In Mr Hart's case the actual additional cost to the employer was negligible, because boys educated through the scheme were filling places which otherwise would have been empty. However, relying on the wording in the Act, the Inland Revenue had taxed a proportion of the total cost of providing the services.
44. The House of Lords in its judicial capacity decided that clear statements made in Parliament concerning the purpose of legislation in course of enactment may be used by the court as a guide to the interpretation of ambiguous statutory provisions. The Lords held such use of statements did not infringe article 9 because it did not amount to questioning a proceeding in Parliament. Far from questioning the independence of Parliament and its debates, the courts would be giving effect to what was said and done there. Lord Browne-Wilkinson said:
`I trust when the House of Commons comes to consider the decision in this case, it will be appreciated that there is no desire to impeach its privileges in any way. Your Lordships are motivated by a desire to carry out the intentions of Parliament in enacting legislation and have no intention or desire to question the processes by which such legislation was enacted or of criticising anything said by anyone in Parliament in the course of enacting it. The purpose is to give effect to, not thwart, the intentions of Parliament.'
A similar principle had already been adopted in Australia and New Zealand before the English decision in Pepper v Hart. It had also been adopted earlier in England, in Pickstone v Freemans, in the context of subordinate legislation, but in that case the admissibility of the parliamentary material seems not to have been questioned.
45. Parliament must be vigilant in protecting its freedom of speech. Any departure by the courts from hitherto accepted practice must be scrutinised thoroughly to see whether, as a matter of principle and practice, it is justifiable. Applying that test the Joint Committee is of the view that the development outlined above in Pepper v Hart is unobjectionable. This use of parliamentary proceedings is benign. The Joint Committee recommends that Parliament should not disturb the decision in Pepper v Hart. However, it is important that this specific court decision should not lead to any general weakening of the prohibition contained in article 9.
Judicial review of ministerial decisions
46. A second and perhaps more important question concerns the use now made of parliamentary proceedings in court proceedings brought for the judicial review of ministerial decisions. Judicial review is the court procedure whereby the High Court reviews the lawfulness of administrative decisions, including ministers' decisions, as well as decisions of lower courts and tribunals. Ministers' powers are limited, and in judicial review proceedings relating to a ministerial decision the court is asked to decide whether the minister acted outside his powers. He might have done so, for instance, by failing to take into account some important matter he should have had in mind or by misdirecting himself on the purpose for which a particular statutory power could be used. The court does not substitute its own discretion for that of the minister. If the minister acted within his powers his decision will stand. If he acted outside his powers his decision was unlawful and the court may quash it. It will then be for the minister to consider the matter afresh.
47. In the last 30 years the courts have developed enormously the ambit of judicial review. Also, since 1979, Parliament has increased its scrutiny of decisions of ministers and government departments through the operation of select departmental and scrutiny committees. Both developments derived from the scope and complexity of modern government and the extent to which its policies, decisions and administrative actions impinge upon the citizen. Parliament makes the law and, politically, calls the government to account for its actions. But the government is also subject to the law and is therefore answerable to the courts if it exceeds or misapplies its powers. If Parliament and the courts respect and support each other's essential functions, they will provide a formidable safeguard against the abuse of power by the executive. Professor Anthony Bradley, whose evidence mainly supported the traditional privileges and powers of Parliament, was here a little reproving:
`The existence of an effective system of administrative law does not conflict with the role of Parliament . . . Because a central feature of the British system of government is the responsibility of ministers to Parliament, the same executive decision may give rise to review on legal grounds by the courts, to debate and questioning on political grounds by the House and to detailed criticism and scrutiny by parliamentary committees. Parliamentarians who are sensitive to the public law role of the courts may find it difficult to accept that judicial review and ministerial responsibility serve complementary purposes and are not mutually exclusive, and that a controversial political decision may give rise both to parliamentary debate and judicial review'.
48. Article 9 becomes germane when judicial review proceedings relate to a ministerial decision announced, or subsequently explained, in the House. Typically, in the court proceedings the applicant quotes an extract from the official report and then sets out his grounds for challenging the lawfulness of the decision in the light of the reasons given by the minister.
49. Use of Hansard in this way has now occurred sufficiently often for the courts to regard it as established practice. Some examples will suffice as illustrations. In several cases challenges were made to the lawfulness of successive policy statements, announced in Parliament, regarding changes in the system for the parole of prisoners. In each case the court proceedings involved scrutinising the ministerial decisions and the explanations given by the minister in Parliament. In Brind (broadcasting restrictions on terrorists) a ministerial statement in Parliament was used as evidence that the minister had exercised his power properly. In the Pergau Dam case evidence given by the minister and an official to committees of the House of Commons was used in support of a successful claim that the decision to grant aid for the construction of the Pergau Dam in Malaysia did not accord with the enabling Act. In a criminal injuries compensation case, the Home Secretary announced in Parliament his decision not to bring into force the statutory compensation scheme but instead to introduce a tariff-based scheme under prerogative powers. In none of these cases does any argument seem to have been advanced, by the government or anyone else, about the admissibility in evidence or the use in court of the statements made in Parliament. Indeed, the practice in court is for both the applicants and the government to use the official reports of both Houses to indicate what is the government's policy in a particular area.
50. We believe Parliament should welcome this recent development. The development represents a further respect in which acts of the executive are subject to a proper degree of control. It does not replace or lessen in any way ministerial accountability to Parliament. It may reinforce it: by their nature judicial review proceedings are seldom, if ever, subject to reporting restrictions and their outcome may be used to pursue the political debate. Both parliamentary scrutiny and judicial review have important roles, separate and distinct, in a modern democratic society. Parliament must retain the right to legislate and take political decisions, but only the courts can set aside an unlawful ministerial decision.
51. The contrary view would have bizarre consequences. This may be why objection has never been taken in court to the admissibility of this evidence. Challenges to the legality of executive decisions could be hampered by ring-fencing what ministers said in Parliament and excluding such statements from the purview of the courts. Ministerial decisions announced in Parliament would be less readily open to examination than other ministerial decisions. That would be an ironic consequence of article 9. Intended to protect the integrity of the legislature from the executive and the courts, article 9 would become a source of protection for the executive from the courts. We do not believe Parliament would wish this to be so. Rather, when challenging a minister's decision an applicant for judicial review should be as free to criticise the minister's reasons expressed in Parliament as those stated elsewhere. An applicant must be at liberty to use a statement made by a minister in Parliament as evidence that the minister misdirected himself or acted for an unauthorised purpose just as much as he can rely on the contents of a departmental letter.
52. A claim that a minister acted in bad faith would be rare, but the underlying principle should be the same even in such an exceptional case. The applicant should be entitled to point to ministerial statements and claim that the minister misled Parliament, even deliberately, if there are good grounds for believing this may be so and this is relevant to the issues arising in the proceedings. It is difficult to see how it could make sense for the courts to be permitted to look at ministerial statements made in Parliament and infer that the minister inadvertently misdirected himself and on that ground set aside his decision, but not be allowed to adjudicate upon a claim that the minister had erred more grievously by knowingly misusing a power. Any question of a minister knowingly misleading the House would also be a serious contempt of Parliament, and would have grave parliamentary consequences.
53. A practical note should be added. Ministerial statements in Parliament take several forms. They include prepared statements, speeches during debates in the chamber or in committee, written and oral answers to questions, replies to adjournment debates and evidence to select committees. As a matter of principle it is not possible to draw a distinction between these different forms, and in practice the courts look at them all. Parliament would expect the courts to make appropriate allowances for extempore answers, and there is every indication they have done so and will continue to do so.
54. A cautionary warning must also be added on a point of constitutional importance. Since a ministerial decision may be debated in Parliament and also subjected to judicial review proceedings in court, it is possible that parliamentary proceedings and court proceedings regarding the same decision may take place simultaneously. This occurred in 1993, on an occasion of political sensitivity. On 20 July 1993 the House of Lords gave the politically controversial European Communities (Amendment) Bill its third reading. Meanwhile on 16 July Lord Rees-Mogg had applied to the court for a declaration that the United Kingdom could not lawfully ratify the treaty on European Union signed at Maastricht in February 1992, and for an order to quash the decision of the Foreign Secretary to proceed to ratify the treaty. This was seen in parts of the House of Commons as an attempt to influence a political debate by judicial means. The Speaker rightly expressed the view that the House was entitled to expect that when the case came on for hearing, the Bill of Rights would be fully respected by all those appearing before the court. Clearly, there is scope here for abuse. The courts must be vigilant to ensure that judicial processes are not used for political ends in a manner which interferes with Parliament's conduct of its business.
55. The Joint Committee recommends that article 9 should not be interpreted as precluding the use of proceedings in Parliament in court for the purpose of judicial review of governmental decisions.
Other court proceedings and ministerial decisions
56. The appropriate method for challenging in court the lawfulness of a ministerial decision is usually by judicial review proceedings. Sometimes a ministerial decision may affect rights of an individual whose protection lies in a different form of court proceedings. An instance would be if a minister were to make a statement in Parliament about an official in his own department in terms that the official then wished to use in support of a claim for constructive dismissal.
57. Similar considerations apply here to those discussed above regarding judicial review. The minister is accountable to Parliament for his decision. His statement is properly made in Parliament but it ought not, for that reason, to be excluded from the evidence the court can examine when the minister's decision is in issue in court proceedings. Unlike judicial review, these court proceedings will be concerned with the effect of a ministerial decision; for instance, whether the official was correctly dismissed. This difference should not lead to any difference in treatment so far as article 9 is concerned.
58. The Joint Committee does not know of any proceedings where this point has actually arisen in court. We are aware of one instance where an official wished to use such a statement in proceedings before an industrial tribunal, but decided not to go ahead. We expect that if the point were to arise in the course of proceedings, the court or tribunal would be likely in practice to look at the extract from Hansard. The contrary view, cloaking an executive statement with parliamentary immunity, would be most unjust. We believe Parliament would benefit by expressly accepting the principle involved.
59. We recommend that the exception of judicial review proceedings from the scope of article 9 should apply also to other proceedings in which a government decision is material.
Issues arising from section 13 of the Defamation Act 1996
60. As already noted, Pepper v Hart should be regarded as a benign, non-critical use of parliamentary proceedings in court, and judicial review as an exceptional use of them because of the intrusion of the executive element. A further issue which arises is more general. It is whether article 9 should be interpreted today as going beyond conferring absolute immunity from legal liability. Should article 9 protect a speech made in Parliament from critical examination in court even though this would not expose the member to any legal liability?
61. The situation posed by this question may arise in either the criminal or civil field. An example in the field of criminal law would be the case of a member of Parliament who defames an individual in the course of a debate in one of the Houses. The individual, prevented by article 9 from suing for defamation, takes matters into his own hands and assaults the member. He is then prosecuted. In his defence or by way of mitigation in the criminal proceedings, the assailant wishes to put forward the defamatory statement and assert the member acted maliciously. As the law stands this would be a breach of article 9. But the accused should surely be permitted to pursue this course, which might affect his sentence if nothing else. A fair criminal trial could not take place if he were refused the opportunity. The only other alternative would be to stop the case, on the basis that the accused would not be able to have a fair trial. This also would be an unfortunate course to take with a criminal charge, and not in the public interest.
62. There is a civil counterpart of the criminal example just given. It is the case of the member who sues a non-member for defamation. In his defence the non-member asserts he was justified in saying what he did, and wishes to rely on statements made by the member in parliamentary proceedings. This was the situation which arose in 1995 in a libel action brought by a member of Parliament, Mr Neil Hamilton, and a political lobbyist, Mr Ian Greer, against The Guardian newspaper over allegations that Mr Hamilton had made corrupt use of his right to ask questions of ministers and had received money via Mr Greer's company (`cash for questions'). In its defence the newspaper sought to justify what it had written by calling evidence about Mr Hamilton's conduct and motives in tabling parliamentary questions and early day motions. The judge found this was contrary to article 9. He stopped the proceedings on the ground that it would not be fair to allow the plaintiffs to sue for libel if the defendant newspaper was not permitted to justify what it had written.
63. This had the effect of denying the plaintiffs a forum for establishing that The Guardian allegations were untrue and, if untrue, receiving financial recompense. In other words, unlike any other citizen, a member of either House could not sue to clear his name if he was alleged to have acted dishonestly in connection with his parliamentary duties.
64. This situation is not unique to this country. The problem arose in two cases in New Zealand and Australia in 1970 and 1990. There the actions were tried on their merits. In the 1970 case in New Zealand no question of privilege seems to have been raised. In the 1990 case in Australia, the South Australian Supreme Court found a way around the difficulty by holding that privilege does not extend to prevent challenges to the truth or honesty of statements made in Parliament where the maker of the statement himself initiates the proceedings.
65. In a later case in New Zealand in 1992, Prebble v Television New Zealand, the issue of privilege was raised. On appeal the judicial committee of the Privy Council decided that article 9 and the wider principle of separation of powers preclude the court from examining the truth or propriety of statements made in Parliament even where this will not expose the statement maker to any criminal or civil penalty. The judicial committee disapproved of the course taken in the earlier Australian and New Zealand cases, and preferred the larger body of United Kingdom and Commonwealth precedents which took a more restrictive view. They held that the privilege protected by article 9 is the privilege of Parliament and the actions of an individual member cannot determine whether or not the privilege should apply. In his judgment Lord Browne-Wilkinson said section 16(3) of the Parliamentary Privileges Act 1987 (Australia) was a correct statement of the effect of article 9 of the Bill of Rights.
66. Section 16 of the Parliamentary Privileges Act 1987 (Australia) was enacted in response to a surge of judicial interventionism in New South Wales in the 1980s. In two criminal cases cross-examination of defendants was permitted on evidence they had given to Senate committees. Not surprisingly, an interpretation of article 9 having this effect was rejected by the Australian Federal Parliament. Section 16(3) provides:
`In proceedings in any court or tribunal, it is not lawful for evidence to be tendered or received, questions asked or statements, submissions or comments made, concerning proceedings in Parliament, by way of, or for the purpose of:
(a) questioning or relying on the truth, motive, intention or good faith of anything forming part of those proceedings in Parliament;
(b) otherwise questioning or establishing the credibility, motive, intention or good faith of any person; or
(c) drawing, or inviting the drawing of, inferences or conclusions wholly or partly from anything forming part of those proceedings in Parliament.'
67. Section 13 of the Defamation Act 1996 was intended to remedy the injustice perceived to exist in the Hamilton type of case. The text of section 13 (set out in annex A) enables a person, who may be a member of either House or of neither House, to waive parliamentary privilege so far as he is concerned, for the purposes of defamation proceedings. The essential protection of members against legal liability for what they have said or done in Parliament remains and cannot be waived.
68. Unfortunately the cure section 13 provides has severe problems of its own and has attracted widespread criticism, not least from our witnesses. A fundamental flaw is that it undermines the basis of privilege: freedom of speech is the privilege of the House as a whole and not of the individual member in his own right, although an individual member can assert and rely on it. Application of the new provision could also be impracticable in complicated cases; for example, where two members, or a member and a non-member, are closely involved in the same action and one waives privilege and the other does not. Section 13 is also anomalous: it is available only in defamation proceedings. No similar waiver is available for any criminal action, or any other form of civil action.
69. The Joint Committee considers these criticisms are unanswerable. The enactment of section 13, seeking to remedy a perceived injustice, has created indefensible anomalies of its own which should not be allowed to continue. The Joint Committee recommends that section 13 should be repealed.
70. Yet there is a problem here. In practice, neither House now treats the libel of one of its members as a contempt, nor is either House equipped to hear libel cases even if such a course were publicly acceptable. In the Hamilton type of case it is, on the one hand, unthinkable that if the media criticise those who have been elected to power, the media should not be free to establish the truth of their criticisms. As was pointed out by Lord Browne-Wilkinson in the Prebble decision, were this not so the results could be `chilling' to the proper monitoring of members' behaviour. On the other hand, if the law is left as enunciated in Prebble, members criticised outside Parliament and accused of misconduct in the performance of their parliamentary duties can find themselves wholly unable to clear their names. This undesirable state of affairs could even, in turn, encourage irresponsible media comment. Commentators would rest secure in the knowledge they could not be called to account in court for allegations of parliamentary misconduct. The difficulty lies in resolving this conflict.
71. The law is, of course, frequently faced with the need to resolve conflicts where one consideration pulls one way and another consideration pulls in a different direction. Sometimes one interest has to be preferred to the other. This has happened in the situation now under discussion. The courts have been properly anxious to keep clear of interfering with Parliament in the conduct of its affairs. There could therefore be no question of the courts investigating the allegations of parliamentary misconduct. They have had to choose between two injustices: injustice to the plaintiff, by not letting him have the opportunity to clear his name, and injustice to the defendant, by not letting him raise a defence of justification when this would require investigation of parliamentary proceedings. The courts have decided the loss should be left to lie where it falls. If a libel action brought by a member cannot be properly tried out on its merits, then it must be stopped, even though this will leave the defamed member without a remedy.
72. We have considered whether there is a third alternative, which will enable justice to be done to both parties: to permit the courts to investigate the alleged misconduct. One way of achieving this in a principled fashion would be that, instead of a member having power to waive article 9, as is the position under section 13 of the Defamation Act 1996, the House itself should be empowered to waive the article 9 privilege in any case where no question arises of the member making the statement being at risk of incurring legal liability. The existence of such a power would enable Parliament to meet the perceived injustice in the Hamilton type of case and in its criminal counterpart. If a member, placed as was Mr Hamilton, started a defamation action, the defendant newspaper would be entitled to seek to prove the truth of its allegations. The member, in turn, would have an opportunity to vindicate himself. In this way justice would be done to both parties, but at the same time the vital constitutional principle of freedom of speech in Parliament would be preserved. When they speak in Parliament members would have, as now, complete confidence that no legal liability could attach to them in consequence.
73. A waiver would not be confined to members or others who consent to waiver of the privilege, nor would it be confined to persons who are themselves parties in the court proceedings, nor would it be limited to defamation proceedings. But we emphasise this power would be available only to the House as a whole and only when there is no question of the member or other person making the statement being exposed in consequence to a risk of legal liability.
74. The latter limitation is important. The Joint Committee was not attracted by the House having an unlimited power of waiver of the article 9 legal immunity. An unlimited power of waiver would mean that when a member speaks he could never be sure that afterwards he might not find himself exposed to legal challenge. That would be inhibiting, and would undermine the freedom that article 9 currently protects. But none of this would arise with a power circumscribed as suggested. Within such limitation the House would retain a discretion and could withhold waiver when waiver would lead to an unacceptable degree of intrusion or when for some other reason waiver was considered undesirable in a particular case. As a decision might not always be straightforward, both Houses would no doubt wish to refer any waiver application for consideration by an appropriate committee, which might also state terms on which any waiver should be given by the House.
75. We recognise that this proposal is subject to one of the disadvantages inherent in the existing section 13. The examination of parliamentary proceedings in court in a libel (or other) action might lead to conflicting decisions of Parliament and the courts about a member's conduct. Lord Simon of Glaisdale expressed the view that the `most serious of all' the objections to section 13 is the scope it creates for a collision between the judiciary and Parliament. This concern was not widely expressed when section 13 was enacted, nor was it one of the criticisms of the section which featured prominently in the evidence we took. We doubt whether in practice there will often be a risk of conflict. In most cases reference to parliamentary proceedings is likely to be subsidiary to the issues before the court. Where there appears to be a serious risk of conflict, the committee considering an application for waiver will need to consider carefully whether waiver would be in the public interest.
76. A more forceful criticism at first sight, and one also levelled at section 13, is that the result would be asymmetrical. Members can rely on the article 9 privilege in respect of defamatory remarks made by them in the House, but they (or, as is now being considered, the House) can waive the privilege when it suits them. The Joint Committee considers the answer lies in appreciating that the proposed power of waiver will not create an imbalance. The basic `imbalance' between members and everyone else, the lack of symmetry, is created by article 9 itself. Members are shielded from legal liability for defamatory statements made in the course of parliamentary proceedings. This is an essential concomitant of parliamentary freedom of speech. What the power of waiver will do is enable the House, while still preserving legal immunity, to permit parliamentary proceedings to be examined in court when the House (not the member) considers that justice so requires and that the privilege can be waived without damaging the interests of the House as a whole.
77. In written evidence to the Joint Committee Dr Geoffrey Marshall expressed a preference for going further. Section 13 should be replaced by a general provision which permits the giving of evidence about parliamentary proceedings in all cases that do not involve the direct protective function of article 9. If such a general provision were adopted, the legal immunity principle enshrined in article 9 would remain intact and inviolate, but article 9 would not afford protection beyond this. We believe this would be an undesirable step, a step too far. A provision of universal application limiting the article 9 protection to cases where there is a risk of legal liability would mean that members, although not facing legal liability, could find themselves called to account in court for what they said in Parliament and why they said it. We believe that, in general, this would not be desirable. Legal immunity may be the principal function of article 9 today, but it is not the only purpose. Although the phrase `impeached or questioned' perhaps supports the view that the article 9 prohibition is co-terminous with legal liability, a wider principle is involved here, namely, that members ought not to be called to account in court for their participation in parliamentary proceedings. This is, and should remain, the general rule. The existence of a (circumscribed) power of waiver by the House would not undermine this principle of non-accountability: the House would retain for itself, in each case, the right to decide whether to waive article 9.
78. What we propose will not work if a decision by the House on whether to exercise its power of waiver in a particular case is, or appears to be, influenced by partisan considerations. Consistency of treatment will be important, and the House will need to be seen to be equitable in granting waiver where the applicant is a non-member, such as a witness or a newspaper. The committee appointed to consider applications should include senior members of the House. Given that the committee for privileges in the House of Lords always contains four law lords, we think this will be an appropriate committee in that House. In the House of Commons the existing functions of the standards and privileges committee would not readily accommodate a new role of advice on waiver. We think that in the Commons the decision might best be made by the Speaker, assisted with advice from a small committee. That committee might comprise members such as the leader and the shadow leader of the House, with the Attorney General and one or more representatives of other parties, coupled with a power to co-opt additional members either generally or for a particular case. In order to emphasise that its proceedings are `proceedings in Parliament' and not subject to judicial review, the committee should be formally appointed by the House early in each Parliament.
79. We were not attracted by the suggestion that non-members should also serve on these committees. The committees will be giving advice on how the Lords or Commons should exercise a power relating to one of their fundamental privileges. Each committee will be at liberty to take evidence or seek views from others, but its membership would properly be confined to members of the House.
80. In order to promote consistency these committees will need to prescribe ground rules or guidelines, setting out their general approach. For instance, we envisage the general approach will be to waive privilege unless there is good reason for not doing so. The guidelines should give examples of grounds for refusal: for instance, where waiver would mean that the Speaker might find herself having to attend court and be cross-examined on discussions she had with a worried member; or the chairman of the committee of selection might be required to give evidence on discussions between himself and the whips about the membership of a committee; or members or officers of the House might find themselves compelled to give evidence of advice tendered to a member regarding the tabling of a question or an amendment to a bill. A waiver which involved this degree of intrusion into the affairs of the House would only be appropriate in exceptional circumstances.
81. The issue here is one of balancing the disadvantages and finding the least unattractive course. As well as defamation, our proposal would deal with the problem of adducing evidence relating to a proceeding in Parliament in any criminal or civil court. It would also go some way towards resolving the current problem, drawn to our attention by the Clerk of the House of Commons, of enabling a court to examine the proceedings of a committee when determining a contractual dispute involving the corporate officer of the House. We recommend this proposal to each House.
82. If this course is rejected, the only alternative we can see is to return to the position as it was before the enactment of section 13 of the Defamation Act 1996 and, hence, to the injustices section 13 sought to remedy. We believe the effort is worth making. It would be a sorry reflection on Parliament if a way cannot be devised to exclude political, specifically party political, interests from decisions which need to be taken on their merits. We believe that, of the various alternatives, this is the best option, even though there are risks and difficulties.
The general principle
83. We have stated our view that in general members ought not to be accountable in court for what they say or do in the course of proceedings in Parliament. We have proposed some special and limited exceptions: the House should have power to waive article 9, but only where this would not expose the speaker of the words or the doer of the acts to legal liability; and the courts should be able to examine proceedings in Parliament when interpreting an ambiguous statute or judicially reviewing a governmental decision or considering the legal consequences of a governmental decision.
84. At present the scope of `ought not to be questioned' in article 9 remains undefined and unclear. To leave this question unresolved has the disadvantage that the courts may find themselves drawn into having to decide the issue. Parliament may not agree with the courts' solution. This is more likely now than in the past. Mr McKay, then clerk assistant, voiced anxiety to us regarding uncertainties surrounding article 9:
`The nineteenth century cases are full of echoes of Blackstone and the `dignity of the House'. The House of Commons had an unchallenged place in the constitution. The courts were anxious to preserve that. You did not get litigants who picked over words. Nowadays that is the way litigants and the courts behave - quite properly. That is the change. That is the uncertainty. It is a modern uncertainty.'
85. In the last 30 years the judicial tide in England has rolled forward to some extent on the parliamentary foreshore, although not so far as in Australia in the 1980s. The Joint Committee considers that the continuing lack of clarity on such a fundamental constitutional provision is undesirable. It is preferable for Parliament to declare now what is the scope of article 9, rather than risk having to change this constitutional provision in Parliament's favour after an unsatisfactory court decision. Subject to the limited and specific qualifications already made on particular points, and subject to one further point, the Joint Committee recommends that Parliament should confirm, as a general principle, the traditional view of article 9: that it is a blanket prohibition on the examination of parliamentary proceedings in court. The prohibition applies whether or not legal liability would arise.
86. Section 16(3) of the Parliamentary Privileges Act 1987 (Australia), the text of which is set out above, took this approach. No court or tribunal may receive evidence, or permit questions to be asked or submissions made, concerning proceedings in Parliament by way of, or for the purpose of, questioning or relying on the truth, motive, intention or good faith of anything forming part of those proceedings in Parliament or drawing an inference from anything forming part of those proceedings. The Joint Committee recommends the enactment of a statutory provision to this effect. In one respect the Australian statute may go too far. It is difficult to see how there could be any objection to the court taking account of something said or done in Parliament when there is no suggestion that the statement or action was inspired by improper motives or was untrue or misleading and there is no question of legal liability. We recommend that the prohibition be coupled with a proviso to the effect that the court may take such statements or conduct into account. These recommendations would not affect the continuing use of Hansard in court to establish what was said or done as a matter of history.
Recommendations on these matters
87. At the outset of this report we set out one test by which the value of any particular parliamentary privilege should be measured, namely, whether each particular right or immunity currently existing is necessary today, in its present form, for the effective functioning of Parliament. The recommendations we propose confirm the traditional view of the scope of article 9. This is justifiable today as much as formerly: those who participate in parliamentary proceedings should not in consequence find themselves having to account for their conduct in any form of court proceedings. We propose some limited exceptions, none of which compromise or impair the legal immunity of those who participate in parliamentary proceedings. The one qualification concerns ministers, who remain legally accountable for governmental decisions even when those decisions are announced or explained in Parliament.
88. The Joint Committee accordingly recommends a statutory enactment to the effect that no court or tribunal may receive evidence, or permit questions to be asked or submissions made, concerning proceedings in Parliament by way of, or for the purpose of, questioning or relying on the truth, motive, intention or good faith of anything forming part of proceedings in Parliament or drawing any inference from anything forming part of those proceedings.
89. We recommend that the mischief sought to be remedied by section 13 of the Defamation Act 1996 should be cured by a different means. Section 13 should be replaced by a short statutory provision empowering each House to waive article 9 for the purpose of any court proceedings, whether relating to defamation or to any other matter, where the words spoken or the acts done in proceedings in Parliament would not expose the speaker of the words or the doer of the acts to any legal liability. Each House will need to consider appropriate machinery once the section has been repealed.
90. The Joint Committee considers it would be sensible to recognise the Pepper v Hart principle and the use of parliamentary proceedings in court actions concerned with the judicial review of governmental decisions or the consequences of governmental decisions. We recommend the enactment of a short statutory provision to the effect that nothing in article 9 shall prevent proceedings in Parliament being examined in any court proceedings so far as they relate to the interpretation of an Act of Parliament or subordinate legislation or to the judicial review of, or the consequences of, governmental decisions, or where there is no suggestion that anything forming part of proceedings in Parliament was inspired by improper motives or was untrue or misleading and there is no question of legal liability.
`Place out of Parliament'
91. The prohibition in article 9 is not confined to the questioning of parliamentary proceedings in courts. It applies also to any `place out of Parliament'. This is another obscure expression of uncertain meaning. To read the phrase as meaning literally anywhere outside Parliament would be absurd. It would prevent the public and the media from freely discussing and criticising proceedings in Parliament. That cannot be right, and this meaning has never been suggested. Freedom for the public and the media to discuss parliamentary proceedings outside Parliament is as essential to a healthy democracy as the freedom of members to discuss what they choose within Parliament. So the scope of the phrase is narrower than this, but wider than merely `courts': the whole phrase is `. . .any court or place out of Parliament'.
92. The interpretation of this expression has never been the subject of a court decision. The point has arisen in the context of tribunals of inquiry set up under the Tribunals of Inquiry (Evidence) Act 1921 where both Houses of Parliament resolve `that it is expedient that a tribunal be established for inquiring into a definite matter (specified in the resolution) of urgent public importance'. These tribunals have the same powers as a court, in particular for enforcing the attendance of witnesses, examining them on oath, and compelling the production of documents. Their purpose is described by a recognised constitutional authority as `to investigate certain allegations or events with a view to producing an authoritative or impartial account of the facts, attributing responsibility or blame where it is necessary to do so'. The 1921 Act was passed in the shadow of the Marconi affair and the controversy over what was widely regarded as an unsatisfactory parliamentary inquiry.
93. It seems likely that a court would decide that a tribunal appointed under the 1921 Act is sufficiently similar to a court to constitute a place out of Parliament for the purposes of article 9 and, accordingly, that such a tribunal would be precluded from examining proceedings in Parliament. This conclusion means that an inquiry cannot be set up under the 1921 Act if its purpose is to look into parliamentary matters which may involve examining proceedings in Parliament. Article 9 would bar the tribunal from conducting any such examination. Thus, as matters stand, where proceedings in Parliament may need to be examined, a non-statutory body, lacking the advantages afforded by the 1921 Act, has to be appointed. A recent instance of such a non-statutory tribunal was Sir Richard Scott's inquiry into `arms for Iraq'.
94. This is not satisfactory. Since Parliament already controls the appointment of such a tribunal, we see advantage in the two Houses having a statutory power to waive article 9 in the resolution of appointment.
95. A statutory power of waiver assumes that article 9 does, or may, apply to 1921 Act tribunals. The Joint Committee considers it would also be advantageous to dispel the uncertainties with a statutory definition. The Parliamentary Privileges Act 1987 (Australia) applied the article 9 prohibition to any court or tribunal, and defined tribunal as any person or body having power to examine witnesses on oath. This seems to provide a clear and sensible basis for the future. In general, power to administer oaths is dependent upon statutory authority. The power is conferred on bodies whose proceedings are endowed with a degree of legal solemnity and formality. It means, for instance, that article 9 will apply to coroners' inquests, lands tribunals and industrial tribunals. Beyond such formal tribunals, article 9 will not apply. By this means the boundary can be clearly delineated, with an embargo on examination of parliamentary proceedings in all courts and similar bodies but not elsewhere.
96. The Joint Committee recommends a statutory enactment to the effect that `place out of Parliament' means any tribunal having power to examine witnesses on oath, coupled with a provision that article 9 shall not apply to a tribunal appointed under the Tribunals of Inquiry (Evidence) Act 1921 when both Houses so resolve at the time the tribunal is established.
`Proceedings in Parliament'
97. Since article 9 confers absolute immunity against civil and criminal liability in respect of `proceedings in Parliament', it is important for members and for the public to know what activities are covered by the phrase. Unfortunately, this is a further aspect of article 9 calling for elucidation. No comprehensive definition has been determined either by Parliament or by judicial decision. In 1689, when parliamentary proceedings were much simpler, a definition may have been thought unnecessary. But this is not so when the phrase is applied to present day parliamentary activities and members' activities. In several respects the scope of this expression is not clear today. As noted earlier in this report, this has been recognised as unsatisfactory time and again by successive committees over the last 30 years.
98. The broad description in Erskine May is a useful starting place:
`The primary meaning of proceedings, as a technical parliamentary term, . . . is some formal action, usually a decision, taken by the House in its collective capacity. This is naturally extended to the forms of business in which the House takes action, and the whole process, the principal part of which is debate, by which it reaches a decision. An individual member takes part in a proceeding usually by speech, but also by various recognised forms of formal action, such as voting, giving notice of a motion, or presenting a petition or report from a committee, most of such actions being time-saving substitutes for speaking. Officers of the House take part in its proceedings principally by carrying out its orders, general or particular. Strangers also may take part in the proceedings of a House, for example by giving evidence before it or one of its committees, or by securing presentation of a petition.'
99. Thus the House of Commons select committee on the Official Secrets Acts (1939) considered that proceedings in Parliament include `everything said or done by a member in the exercise of his functions as a member in a committee of either House, as well as everything said or done in either House in the transaction of parliamentary business'. This is so even if the acts occur outside the precincts of the Palace of Westminster. For example, select committees sometimes meet elsewhere.
100. The position regarding certain activities is reasonably clear. In this category are debates (expressly mentioned in article 9), motions, proceedings on bills, votes, parliamentary questions, proceedings within committees formally appointed by either House, proceedings within sub-committees of such committees, and public petitions, once presented. These are all proceedings in Parliament. Statements made and documents produced in the course of these proceedings, and notices of these proceedings, all appear to be covered. So are internal House or committee papers of an official nature directly related to the proceedings, and communications arising directly out of such proceedings, as where a member seeks further information in the course of proceedings and another member agrees to provide it. So too are the steps taken in carrying out an order of either House.
101. On the other hand, certain activities of members are not protected, even though they may take place within the House or a committee. A casual conversation between members in either House even during a debate is not protected, nor an assault by one member on another. In 1947 a member of the House of Commons sued a newspaper for defamation because it claimed she had `danced a jig on the floor of the House of Commons' during a division on a bill. Motions were agreed permitting members to attend the trial and give evidence both for and against the member on what had occurred in the Chamber.
102. Repetition, even verbatim repetition, by a member of what he said during proceedings has no protection elsewhere under article 9. Nor does article 9 cover proceedings of committees not appointed or nominated by either House, such as backbench and party committees, or the Ecclesiastical Committee, which is a statutory committee. The status of the proceedings of the House of Commons Commission, which also is a statutory body, is discussed further below.
103. One important area of uncertainty is members' correspondence. There has been long-standing concern about correspondence and other communications undertaken on behalf of constituents by members of the House of Commons. Members of both Houses now engage in many different activities in discharging their parliamentary duties. As well as speaking in debates, participating on committees and asking parliamentary questions, they write letters and make representations to ministers, government agencies and a wide variety of bodies, both public and private. Constituents of members of the House of Commons expect their members to take up their concerns at local and at national level. In recent years members' work has been transformed by a very substantial increase in this type of constituency correspondence. Most of these activities are not protected by parliamentary privilege. Article 9 protects parliamentary proceedings: activities which are recognisably part of the formal collegiate activities of Parliament. Much of the work of a member of Parliament today, although part of his duties as a member of Parliament, does not fall within this description.
104. This issue arose in 1958 in a case concerning a member, Mr George Strauss. He wrote an allegedly defamatory letter to a minister on a matter he might later have wished to raise in the House, namely, criticism of the purchasing policies of the London Electricity Board. The House resolved by a narrow majority that the letter was not a proceeding in Parliament as it did not relate to anything then before the House.
105. Both the 1967 House of Commons committee on parliamentary privilege and its 1977 committee of privileges, as well as the 1970 joint committee on publication of proceedings in Parliament, considered the House's decision was right in law. But all agreed that the argument in favour of correspondence with ministers having the benefit of absolute privilege in defamation actions was so compelling that the law should be changed. The 1977 committee considered it was anomalous for a member's communications with the parliamentary commissioner for administration to enjoy absolute privilege under the Parliamentary Commissioner Act 1967 while his communications with a minister did not. The 1970 joint committee's proposed statutory definition of `proceedings' included:
`all things said, done or written between members or between members and officers of either House of Parliament or between members and ministers of the Crown for the purpose of enabling any member or any such officer to carry out his functions as such . . ..' (our italics).
106. There is force in the view that proceedings in Parliament should include letters to ministers raising matters which could equally well be pursued by parliamentary question and thus be absolutely privileged. The parliamentary question developed as a device for raising specific matters capable of being answered shortly and without the need for debate. The 1967 committee commented:
`Many members now use the parliamentary question as a weapon of penultimate resort to give publicity to its subject-matter when, and only when, they cannot obtain satisfaction by correspondence; yet the House has taken the view that such correspondence does not fall within the ambit of `proceedings in Parliament' . . .. The practical effect of this distinction seems to Your Committee to be indefensible'.
To some extent the distinction has recently been blurred further, now that a question to a minister may elicit a reply in the form of a letter from the head of the executive agency more directly concerned. Even if not `proceedings', such replies, when published in the official report, are protected by the absolute privilege afforded by the Parliamentary Papers Act 1840.
107. An extension of absolute privilege to members' correspondence with ministers would therefore seem logical. But on closer examination it would create problems of principle. Why distinguish between a member's letter to a minister and a member's letter to a public official or a local authority? Should a constituent's correspondence accompanying a member's letter be considered part of a `proceeding'? Should a member's reply to the constituent have the same privilege? When a matter is raised in debate in the House a member may be subject to challenge from other members. Parliamentary questions should be short and to the point, and are subject to rules of order. Letters can be extensive, and if absolutely privileged under article 9 might be used as a means of publishing with impunity defamatory statements or trade secrets. With modern photocopying facilities and e-mail, many people can easily see copies of letters, sometimes inadvertently. One reason why letters to ministers have increased appreciably is the rise in the number of constituency cases ill-suited to proceed by way of written questions, because they are too detailed or for some other reason. If parliamentary privilege were extended to members' correspondence, Parliament would probably become involved in attempting to make rules for correspondence, both constituency correspondence and generally, as it has for questions and other proceedings. The comparison drawn by the 1977 committee is not convincing. Correspondence with the parliamentary commissioner for administration consists mainly of complaints of maladministration by constituents, forwarded by members for investigation by the commissioner under statutory powers. By their nature these complaints may be defamatory, and exposure to defamation actions would unduly obstruct the commissioner's investigations.
108. It remains the case that the distinction between a member's letter and a member's speech or parliamentary question can be somewhat arbitrary. A letter may relate to the same subject matter as an existing proceeding, and may simply be for the member a more convenient or sensible way of pursuing the same objective. It is anomalous that a member who, for example, received information that children were being abused in a named institution, would have the benefit of article 9 if he tabled a question but not if he wrote to the responsible minister first. But the boundary of privilege has to be drawn somewhere, and the present boundary is clear and defensible. Moreover, although members taking up difficult constituency cases often receive threatening letters from solicitors, cases in court are rare. Professor Bradley summed up the position in evidence:
`There was a strong case for [absolute privilege] in 1957 at the time of the Strauss case. . . . That strong case is still there. However, we have had the last 40 years in which the qualified privilege of common law seems to have enabled members of both Houses to carry out their functions satisfactorily'.
109. This practical consideration has weighed heavily with the Joint Committee, coupled with the absence of any defensible line between constituency correspondence with a minister and constituency correspondence with others.
110. There is another consideration. Article 9 provides an altogether exceptional degree of protection, as discussed above. In principle this exceptional protection should remain confined to the core activities of Parliament, unless a pressing need is shown for an extension. There is insufficient evidence of difficulty, at least at present, to justify so substantial an increase in the amount of parliamentary material protected by absolute privilege. Members are not in the position that, lacking the absolute immunity given by article 9, they are bereft of all legal protection. In the ordinary course a member enjoys qualified privilege at law in respect of his constituency correspondence. In evidence the Lord Chief Justice of England, Lord Bingham of Cornhill, and the Lord President of the Court of Session, Lord Rodger of Earlsferry, both stressed the development of qualified privilege at law and the degree of protection it provides nowadays to those acting in an official capacity and without malice. So long as the member handles a complaint in an appropriate way, he is not at risk of being held liable for any defamatory statements in the correspondence. Qualified privilege means a member has a good defence to defamation proceedings so long as he acted without malice, that is, without some dishonest or improper motive.
111. Admittedly, qualified privilege is less effective than the sweeping, absolute protection afforded by article 9, in two respects. First, article 9 provides a defence not only to defamation claims but also to any claim that by sending the constituent's letter to the minister the member committed an offence under the Official Secrets Acts or a breach of a court order. Secondly, defamation proceedings brought contrary to article 9 will generally be dismissed peremptorily, without any need for a trial, as it will be obvious from the outset that they are bound to fail. With a defence of qualified privilege, if there is sufficient prima facie evidence of malice the case will ordinarily proceed to trial for a verdict by the jury. So a member may be put to the inconvenience and expense of defending an action before he is vindicated.
112. Constituency correspondence has burgeoned over the last 30 years, but since Strauss there have been remarkably few, if any, instances of defamation actions against members who were acting on behalf of their constituents. We recommend that the absolute privilege accorded by article 9 to proceedings in Parliament should not be extended to include communications between members and ministers.
Members' drafts and notes
113. Drafts and notes frequently precede speeches and questions, and members often need assistance and advice in preparing them. By necessary extension, immunity accorded to a speech or question must also be available for preparatory drafts and notes, provided these do not circulate more widely than is reasonable for the member to obtain assistance and advice, for instance from a research assistant. It would be absurd to protect a speech but not the necessary preparatory material. The same principle must apply to drafts of evidence given by witnesses. This principle must also apply to drafts of speeches, questions and the like which in the event are not used. A member cannot always catch the Speaker's eye, or he may change his mind.
114. This approach accords with the view expressed by the select committee of the House of Commons on the Official Secrets Acts (1939). The appointment of this committee arose out of the action taken by a member, Mr Duncan Sandys, in threatening to table a question regarding the inadequacy of London's anti-aircraft defences. The draft question included information, classified as secret, about the number of available guns and their state of readiness. Mr Sandys sent the draft to the minister. In its report the committee said there were some:
`communications between one member and another, or between a member and a minister, so closely related to some matter pending in, or expected to be brought before the House, that though they do not take place in the chamber or a committee room they form part of the business of the House, as, for example, where a member sends to a minister the draft of a question he is thinking of putting down or shows it to another member with a view to obtaining advice as to the propriety of putting it down or as to the manner in which it should be framed'.
The House agreed with this conclusion.
Assistance by House staff
115. Memoranda from the Librarian of the House of Commons and from the staff side of the Whitley Council showed there is concern over the degree of legal protection afforded to the research briefs, notes and other advice prepared by staff for members. The House of Commons library, in particular, often briefs members on the background to complicated and sensitive constituency cases, as well as providing research for contributions in the House or in committee. It is our intention that all material directly related to proceedings in Parliament should be protected by article 9, such as preparatory material related to a member's participation in debate or in committee. Material which has no direct connection with proceedings in Parliament is not protected.
116. To extend absolute privilege to all research work carried out for members by House staff would raise difficulties similar to those applying to members' correspondence. Here also the degree of protection provided by qualified privilege at law should not be underestimated. We doubt whether any advice given by the libraries and other departments to members of either House could exhibit any credible sign of malice. However, we are concerned at the extent to which members unthinkingly include in envelopes to constituents, for example, a brief prepared by a member of staff which, to assist the member, may have been written in frank terms. This has caused difficulties for some staff, and occasionally a threat of litigation. Members should make use of the advice, not disseminate it.
117. There is another category of material which does not fall within article 9 but can nonetheless claim to be within Parliament's right to control its own affairs (exclusive cognisance) and therefore protected under that heading. This comprises work done in providing services under the direction of the House or its presiding officer. Examples are arrangements made by Black Rod and the Serjeant-at-Arms for the security and proper functioning of the two Houses, and action taken by either House to implement decisions of the Speaker or relevant committee on, for instance, the use of committee rooms, or the rules governing parliamentary groups.
Assistance by personal staff
118. Members frequently employ personal staff and research assistants of their own to assist with their parliamentary duties. The material produced for members by their staff and assistants may sometimes be protected by parliamentary privilege, as material directly related to proceedings in Parliament. But, as with House staff, other material enjoys no parliamentary privilege.
Registration of members' interests
119. Another area of uncertainty concerns registration of members' interests. Both Houses have procedures for registration of members' personal pecuniary interests. These procedures are part of the machinery brought into being by each House for the better conduct of its business. They are under the sole control of each House and not subject to supervision by courts of law. We consider these procedures also qualify, or should qualify, for the protection afforded by article 9 to proceedings in Parliament.
120. This applies as much to the registers themselves as to the steps leading up to registration. The registers, which are an integral part of the procedures, are open to inspection by the public, and in both Houses the register is published annually. Moreover, any member of the public may complain to the parliamentary commissioner for standards or, in the case of the House of Lords, to the sub-committee on Lords' interests, that a member has not properly registered his interests. None of these characteristics deprives the registers of their status as a proceeding in Parliament. Publication of a speech in the official report does not deprive a speech in the House of the protection of article 9. Nor does the ability of the public to lodge complaints that a member is in breach of the code of conduct deprive disciplinary proceedings of their status as parliamentary proceedings.
121. This appraisal of the status of the registers of interests, if correct, does not prevent the registers from ever being referred to or used in court proceedings. As with Hansard, so with the register, the fact that an entry exists or does not exist could be established and used in court, as noted above. However, the status of the registers as part of a proceeding in Parliament prevents a member from being examined in court on his reasons for registering or not registering his interest. It is not open to a court to adjudicate upon whether a member should have registered a particular interest, or to draw an adverse inference from his failure to do so. In like fashion, the court may not adjudicate upon whether a member was at fault in failing to declare an interest in a debate or proceeding in the House or a committee.
122. In this regard the court decision in Rost v Edwards is a cause for concern. In 1989 Mr Peter Rost, a member of Parliament, sued the writer of an article in The Guardian newspaper for libel in asserting that Mr Rost had been seeking to sell confidential information obtained by him as a member of the House of Commons select committee on energy. As part of a defence of justification, the defendants asserted that Mr Rost should have registered his parliamentary consultancies. In response Mr Rost wished to establish, by reference to the published rules and to Erskine May, the requirements laid down by the House for the registration of pecuniary interests, and to call evidence on the nature of his consultancies and the reason why he had not registered them. The Solicitor General submitted that the House of Commons register of members' interests and the related practice and procedure formed part of the proceedings of Parliament. The trial judge rejected this submission, and held that registration of members' interests is not a proceeding in Parliament.
123. It would not be appropriate for us to venture a view on the correctness of this decision as a matter of law. But we are in no doubt that, if this decision is correct, the law should be changed. As the law now stands, it is open to a court to investigate and adjudicate upon an alleged wrongful failure to register. That ought to be a matter for Parliament alone, in the same way as any other alleged breach of its rules is a matter for Parliament alone. We recommend that legislation should make clear that the maintenance of the registers, and hence the registers themselves, are proceedings in Parliament.
124. The House of Commons also maintains three other registers which have been open for public inspection since autumn 1998. These relate to the relevant pecuniary interests of parliamentary journalists, members' staff and all-party and parliamentary groups. The recommended clarification should apply also to these and any other register of interests prescribed by resolution of either House.
Complaints against members
125. Complaints relating to the conduct of members, whether from other members or the public, are made in the House of Commons to the parliamentary commissioner for standards and in the House of Lords to the committee for privileges (which refers them to its sub-committee on Lords' interests). The commissioner is an officer of the House of Commons, and her duties include receiving and, if she thinks fit, investigating complaints, and reporting to the committee on standards and privileges. Investigation and adjudication of complaints fall squarely within the concept of proceedings in Parliament. Since Rost v Edwards the House of Commons has agreed to a written code of conduct on the basis of which any complaint against a member of that House will be judged. We consider this code forms part of proceedings in Parliament and, like the register of members' interests, should not be questioned in the courts.
126. The only area of doubt concerning complaints relates to the status of a complaint the commissioner declines to take up on the ground that it is frivolous, for example, because its only basis is an unsubstantiated newspaper story or television report. We are of the view that, once taken up for investigation, a complaint partakes of the nature of a parliamentary proceeding: it becomes part of that proceeding, along with any correspondence which then takes place and any oral evidence which is produced. Until then, a complaint cannot be regarded as part of a parliamentary proceeding or entitled to the absolute immunity that accompanies those proceedings. We recommend this should be made clear in any statutory definition of parliamentary proceedings.
A statutory definition
127. A statutory definition of proceedings in Parliament will not solve all problems, but it will remove some areas of confusion. We recommend that the uncertainty in these areas should be ended without further delay. Section 13(4) of the Defamation Act 1996 contained a partial definition for a specific purpose. Australia has enacted a definition in section 16(2) of the Parliamentary Privileges Act 1987 (Australia). Annex B to this report sets out the definition recommended by the 1970 joint committee and the definition enacted in Australia. Annex A contains the definition used in the Defamation Act 1996.
128. The 1970 joint committee's recommendation has been endorsed by other select committees, but it is open to criticism. Although long and detailed, it is still not an exhaustive definition: it is expressed to be without prejudice to the generality of the expression `proceedings in Parliament', and gives no guidance on what that expression is broadly aimed at. Paragraph 1(b) is expressed in wide terms which are intended to include members' correspondence with ministers. The definition in the Parliamentary Privileges Act 1987 (Australia) provides a better model. It is more concise, and gives a broad overall definition of proceedings in Parliament as `all words spoken and acts done in the course of, or for purposes of or incidental to, the transacting of the business of a House or of a committee'. The key expression `business of a House' is left undefined but is still useful in conveying the distinction between the collegiate work of the House and the work of individual members, such as constituency correspondence. The phrase `or incidental to' might read better as `or necessarily incidental to'. Otherwise it may be too loose.
129. The Joint Committee recommends the enactment of a definition on the following lines:
`(1) For the purposes of article 9 of the Bill of Rights 1689 `proceedings in Parliament' means all words spoken and acts done in the course of, or for the purposes of, or necessarily incidental to, transacting the business of either House of Parliament or of a committee.
(2) Without limiting (1), this includes:
(a) the giving of evidence before a House or a committee or an officer appointed by a House to receive such evidence
(b) the presentation or submission of a document to a House or a committee or an officer appointed by a House to receive it, once the document is accepted
(c) the preparation of a document for the purposes of transacting the business of a House or a committee, provided any drafts, notes, advice or the like are not circulated more widely than is reasonable for the purposes of preparation
(d) the formulation, making or publication of a document by a House or a committee
(e) the maintenance of any register of the interests of the members of a House and any other register of interests prescribed by resolution of a House.
(3) A `committee' means a committee appointed by either House or a joint committee appointed by both Houses of Parliament and includes a sub-committee.
(4) A document includes any disc, tape or device in which data are embodied so as to be capable of being reproduced therefrom.'
Disputes on the application of article 9
130. Regardless of whether a definition of proceedings in Parliament is enacted, disputes will continue to arise in the course of court proceedings over the availability of article 9 as a defence in the circumstances of the case. A simple instance is where a member is being sued for defamation. In his defence he claims he wrote the libellous letter in the course of proceedings in Parliament. The plaintiff disputes this. The judge hearing the case is then called upon to decide this issue.
131. The Joint Committee has considered whether, when this type of dispute arises over the application of article 9, the issue should continue to be resolved by the court or, instead, there should be some other and possibly more expert forum, but independent of Parliament. The Lord Chief Justice of England suggested as a possibility that if Parliament felt such a mechanism were required the judicial committee of the Privy Council might be suitable. Its decisions would be those of a court, and over time it would build up a body of precedents. Indeed, references to the Privy Council in this field have already occurred. In the 1950s during its consideration of the Strauss case the House of Commons referred an issue involving the interpretation of the Parliamentary Privilege Act 1770 to the judicial committee.
132. The Joint Committee explored this suggestion with other witnesses, and also considered the possibility of references to a body based on the judicial committee but whose membership would include former members of the House of Commons. We do not recommend these proposals. Our view is that the mechanism of a reference to the judicial committee is probably better suited to giving the House advice on issues involving points of principle. When issues of principle arise, either House may seek such advice by a simple resolution, on the basis of which the Crown will make a reference to the judicial committee of the Privy Council under the Judicial Committee Act 1833. To introduce an additional stage into a libel action, for instance, when there may be no point of principle at stake, would cause delay and expense for the parties. Moreover, it might not be the best course to remove the jurisdiction from the trial judge who, unlike the judicial committee, would be in possession of all the facts. If a point of principle were to arise in court proceedings the judge would always be able to turn to the Attorney General and invite his assistance on issues concerning article 9. If a point were taken by either House of Parliament rather than the court, the House could seek to intervene in the case, either by briefing counsel directly or via the Attorney General.
Scotland and Northern Ireland
133. The Scottish `Claim of Right' of 1689 contained a provision narrower in scope than article 9. It provided that `for redress of all grievances, and for the amending, strengthening and preserving of the laws, parliaments ought to be frequently called and allowed to sit and the freedom of speech and debate secured to the members'. The Bill of Rights was not enacted in any part of Ireland, although the Irish Parliament prior to the Union assumed similar privileges to the Parliament of Great Britain, and the Northern Ireland Parliament enjoyed the same privileges as Westminster by virtue of the Government of Ireland Act 1920.
134. Doubts have been raised on whether a law passed for England and Wales in 1689 would apply in other parts of the United Kingdom. Despite the absence of case law, both the Lord President of the Court of Session and the Lord Chief Justice of Northern Ireland, Sir Robert Carswell, were convinced that the law would be interpreted in Scotland and Northern Ireland so as to reflect closely the interpretations placed upon parliamentary privilege by the English courts, even though the interpretation in every case might not be precisely the same. Although an element of doubt must remain, the Joint Committee has proceeded throughout this report on the basis that the privileges of the United Kingdom Parliament will be interpreted and applied in a similar fashion throughout the United Kingdom. Nevertheless, if there were to be legislation on privilege, we recommend that the extent of freedom of speech of the United Kingdom Parliament in the laws of Scotland and Northern Ireland should be expressly harmonised with the law of England and Wales. The opportunity should also be taken to declare that the other existing rights and immunities accorded under the law of England and Wales to the two Houses, their members and officers are likewise applicable throughout the United Kingdom.
94 United Kingdom cases are briefly described in Erskine May, 22nd ed (1997), chapters 6 to 11. However, there are many Commonwealth cases. For Canada, Australia and New Zealand, see the memoranda from the Parliaments of those countries printed in vol 3 to this report. A recent specialist account is Parliamentary Privilege in Canada, 2nd ed (1997), by J P Joseph Maingot QC, which contains references to many Commonwealth cases in addition to those of Canada. Back
95 See Geoffrey Lock, `The 1689 Bill of Rights', Political Studies XXXVII, December 1989, pp 540-561. Back
96 Writing of parliamentary privilege in (probably) the late 1640s, the Earl of Clarendon (who as Edward Hyde had been a member of both the Short and Long Parliaments) described the privilege as he believed it to have properly existed before the civil war as follows. `If a man brings an information, or an action of the case, for words spoken by me, and I plead, that the words were spoken by me in Parliament, when I was a Member there; and that it is against the privilege of Parliament, that I should be impleaded in any other place, for the words I spake there, I ought to be discharged from this action or information, because this privilege is known, and pleadable at law: but that judge can neither punish nor examine the breach of privilege, nor censure the contempt. And this is the true and proper meaning of the old received axiom, that they are judges only of their own privileges' (Clarendon, A True Narration of the Rebellion and Civil Wars in England, Book IV). Back
97 e.g. Eliot's Case (1629) 3 St.Tr. 294; and Pepper v Hart AC 593. Back
98 Church of Scientology of California v Johnson-Smith 1 QB 522. Back
99 e.g. Goffin v Donnelly (1881) 6 QBD 307. Back
100 These words are to be found in section 13(4) of the Defamation Act 1996. Back
101 e.g. Dr Geoffrey Marshall FBA in his memorandum and oral evidence, vol 2, p 204, QQ 774-776; and Sir John Laws (Mr Justice Laws) in Law and Democracy (1995), pp 72, 76. Back
102 So called because TC Hansard was first the printer, and later the publisher, of the official series of parliamentary debates covering both Houses, inaugurated by William Cobbett in 1803: Erskine May, 22nd ed (1997), p 220. Back
103 Except for evidence on what the House had done taken from the Journals of the House. References occur at least as early as 1695, and regularly thereafter. Under section 3 of the Evidence Act 1845 (which does not extend to Scotland), the Journals were to be admitted as evidence by the courts without formal proof being given of their accuracy. The Journals of the House of Lords have always been held to be public records. Back
104 CJ (1980-81) 828 (31 October 1981); Committee of Privileges, First Report, HC (1978-79) 102. Back
105 Commentaries on the Laws of England (4 vols 1765-69), vol 1, 163; citing Coke, Institutes of the Laws of England (4 Vols 1628-44) vol 4, 15. Back
106 Pepper (Inspector of Taxes) v Hart AC 593. For an account of the background see Erskine May, 22nd ed (1997), p 91; Francis Bennion Statutory Interpretation, 2nd ed (1993), Supplement. Pepper v Hart changed a very old constitutional practice. Thus Millar v Taylor (1769) 4 Burr 2303 at 2332: `the sense and meaning of an Act of Parliament . . . must be collected from what it says . . . not from the history of changes it underwent in the House where it took its rise. That history is not known to the other House or to the Sovereign'. Similarly, in Edinburgh and Dalkeith Railway Co. v Wauchope (1842) 8 Cl & Fin 710 at 723-724: `no court of justice can inquire into the mode in which it was introduced to Parliament, nor what was done previous to its introduction, or what passed in Parliament during the various stages of its progress through both Houses.' These and other cases are quoted in Geoffrey Marshall `Hansard and the interpretation of statutes' in The Law and Parliament ed D Oliver and G Drewry (Butterworths 1998). Back
107 Standing Committee Proceedings; Standing Committee E, 17 and 22 June 1976. Back
108 Pepper v Hart AC 593. Back
109 Section 15AB of the Acts Interpretation Act 1901 (Australia), as inserted by amendment in 1984, provides that if any material not forming part of the Act is capable of assisting the ascertainment of the meaning of the provision of the Act, consideration may be given to that material to confirm its meaning, or to determine the meaning when the provision is ambiguous or obscure or where the ordinary meaning leads to results that are manifestly absurd or unreasonable. See too, Historic House Hansard, 3 April 1984, pp 1267 ff, 3 May 1984, pp 1746-1797. A similar approach has been adopted in New Zealand in cases such as Howley v Lawrence Publishing Co. Ltd 1 NZLR 404; and New Zealand Maori Council v Attorney General 1 NZLR 614. See Philip A. Joseph, Constitutional and Administrative Law in New Zealand (1993), pp 371-2. Back
110 AC 66. The Equal Pay (Amendment) Regulations 1983 introduced a new section into the Equal Pay Act 1970 to comply with the decision of the European Court of Justice in EC Commission v United Kingdom ICR 578. Back
111 Vol 2, p 124. A description by Professor Anthony Bradley of the development of judicial review and its relationship with parliamentary proceedings is to be found partly in the memorandum accompanying his oral evidence (vol 2, pp 122-127) but principally in a separate memorandum by him published in vol 3, pp 145-150. Back
112 In re Findlay AC 318; Pierson v Home Secretary 3 AER 577; R v Home Secretary, ex parte Venables AC 407; and R v Home Secretary, ex parte Hindley QB 751. Back
113 R v Home Secretary, ex parte Brind 1 AC 696. Back
114 R v Secretary of State for Foreign Affairs, ex parte World Development Movement 1 WLR 386. Back
115 R v Secretary of State for the Home Department, ex parte Fire Brigades Union 2 AC 513. Back
116 Letter to the Joint Committee's chairman from Lord Woolf, Master of the Rolls, vol 3, p 151. Back
117 Memorandum by Professor Anthony Bradley, vol 3, p 149, paragraph 17. Back
118 Letter from Lord Woolf, Master of the Rolls, vol 3, p 151. Back
119 R v Foreign Secretary ex parte Rees-Mogg QB 552, 561. Back
120 HC Deb 21 July 1993, cc 353-54. Back
121 Constructive dismissal occurs when an employee terminates his contract of employment but the employer's conduct entitles the employee to claim he was dismissed. Back
122 The case of the late Mr John Marriott, former governor of Parkhurst prison, was drawn to our attention by Dr Peter Brand MP. Back
123 Q 785; see too, memorandum from Liberty, vol 3, p 53, paragraph 31. Back
124 Hamilton v Hencke; Greer v Hencke (21 July 1995). At about the same time an unrelated action by another member of Parliament, Mr Rupert Allason, was stayed for a similar reason: Allason v Haines (14 July 1995). Back
125 News Media Ownership v Finlay NZLR 1089. Back
126 Wright and Advertiser Newspapers Ltd v Lewis (1990) 53 SASR 416. Back
127 1 AC 321. A former New Zealand government minister alleged that he had been defamed in a television broadcast. The TV company sought to prove the allegations by relying on statements and actions made outside Parliament and also in the House of Representatives. The judge struck out the allegations which he held might impeach or question proceedings in Parliament, in contravention of article 9. The court of appeal upheld the decision but ordered the plaintiff's action to be stayed unless and until privilege was waived by the House and the individual members concerned. The House privileges committee decided that the House had no power to waive privilege. The proceedings were permitted to continue because the privileged material was comparatively marginal and there could still be a fair trial. The judicial committee's decision supported the court of first instance. See judgment of Smellie J in the High Court of New Zealand, A 785/90 (24 June 1992); judgment in the New Zealand Court of Appeal CA 161/92 (2-5 November 1992). Back
128 R v Foord and R v Murphy: see the judgment of Hunt J reported in 64 ALR 498. Back
129 The section was inserted by the Lords as an amendment to the Defamation Bill [Lords] which it considered in March, April and May 1996. The Commons debated the provision in May and June 1996 and agreed to it. See HL Deb, 8 March 1996 (Second Reading); 2 April 1996 (Committee of the Whole House); 16 April 1996 (Report); and 7 May 1996 (Third Reading). See also HC Deb, 21 May 1996 (Second Reading); 13 June 1996 (Standing Committee A); and 24 June 1996 (Report). Back
130 e.g. QQ 376-381, 498-502, 577, 785; and memoranda by The Lord Chief Justice of England, vol 2, p 110; the former Parliamentary Commissioner for Standards, vol 2, p 219; Dr Geoffrey Marshall, vol 2, p 204; the Guild of Editors, vol 3, p 16; the Newspaper Society, vol 3, p 18; and The News of the World, vol 3, p 45. See too `A Question of Privilege: The crisis of the Bill of Rights', by Lord Simon of Glaisdale in The Parliamentarian, April 1997. Back
131 The Commons has recognised that there could be extremely serious cases which involved improper obstruction of the functions of Parliament and serious reflections on the occupants of the Chair, where the House might wish to use its penal powers: see HC (1967-68) 34, paragraphs 42-45, and Committee of Privileges, Third Report, HC (1976-77) 417, paragraphs 5-6. Back
132 1 AC 336 (see paragraph 65 above). Back
133 The Parliamentarian, April 1997. Back
134 Memorandum, paragraph 5, vol 2, p 204. Back
135 For exceptions where ministerial decisions are involved, see paragraphs 46-59. Back
136 See memorandum by The News of the World, vol 3, p 45. Back
137 Q 52; and paragraph 13 of his memorandum, vol 2, p 4. See also paragraphs 246, 252-259 below. Back
138 See paragraph 40 above. Back
139 Q 44. Back
140 Paragraphs 45, 55, 59 and 73 above. Back
141 Paragraph 86 below. Back
142 Paragraph 66. Back
143 The position in Australia is not wholly clear. Judicial review proceedings were not expressly excepted from section 16 of the Parliamentary Privileges Act 1987 (Australia), and the interpretation of section 16 has not been considered by the High Court. The view of the Australian Attorney General, the Hon. Daryl Williams AM QC MP, is that the Act has not proved inhibiting to the judicial review of administrative action and that, given the rules and process of administrative decision-making in Australia, it is unlikely that an applicant for judicial review would suffer from being unable to rely on privileged parliamentary material to challenge a minister's decision: see his letter in vol 3, pp 178-179. In response to a request from the Joint Committee, Professor G J Lindell of the University of Melbourne has examined this matter in some detail: see vol 3, pp 164-177. Back
144 AW Bradley and KD Ewing Constitutional and administrative law (12th edition, 1997), p 754. Back
145 The affair concerned allegations that ministers bought shares in the Marconi company when they knew that action by the government would mean that the share price would rise. See Select Committee on Marconi's Wireless and Telegraph Company: Special Report HC (1913) 152; HC (1913) 217; resolution of the House, CJ (1913) 347. Back
146 Letter to the chairman of the Joint Committee from the Attorney General, vol 3, p 178. Back
147 The application of article 9 does not appear to have been considered by the Royal Commission on Tribunals of Inquiry, chaired by Lord Salmon, when it reviewed the working of the Act in 1966 (Royal Commission on Tribunals of Inquiry: Report of the Commission, Cmnd 3121). See also, Barry Winetrobe, `Inquiries after Scott: the return of the tribunal of inquiry': Public Law, spring 1997. Back
148 See Report of the Inquiry into the export of Defence Equipment and Dual-use Goods to Iraq and related Prosecutions: HC (1995-96) 115 (5 vols and CD-Rom). Back
149 However, see paragraphs 56-59 above. Back
150 `Proceedings in Parliament' without definition is used in several statutes: e.g. section 41 of the Copyright Act 1986; section 26 of the Public Order Act 1986; section 6 of the Human Rights Act 1998. Section 1 of the Parliamentary Papers Act 1840 uses the undefined expression `proceedings' as part of the phrase `reports, papers, votes or proceedings of either House of Parliament'. Back
151 For references see footnote 18 above. Back
152 22nd ed (1997), p 95. While referring to this definition, J P Joseph Maingot QC, in Parliamentary Privilege in Canada (1997), p 80 gives this supplementary definition: `As a technical parliamentary term, `proceedings' are the events and the steps leading up to some formal action, including a decision, taken by the House in its collective capacity. All of these steps and events, the whole process by which the House reaches a decision (the principal part of which is called debate), are `proceedings''. Back
153 First Report from the Select Committee on the Official Secrets Acts HC (1937-38) 173; Report from the Select Committee on the Official Secrets Acts HC (1938-39) 101. Back
154 HC 101 (1938-39), p v. Back
155 Lake v King (1667), 83 ER 387, 84 ER 226, 290, 312, 415, 417, 506, 526; 85 ER 128, 177; 1 Saund. 131; Halsbury's Laws of England, 4th ed, vol 34, p 598. Back
156 Although the courts consistently refuse to hear evidence questioning debate, practice in respect of other proceedings varies: most recently in Allason v Campbell (1996) TLR 279 the court heard detailed evidence on who initiated and participated in the drafting, signing and tabling of an early day motion, and the reasons for its coming into being (Erskine May, 22nd ed (1997), p 95, fn 4). Back
157 On 14 July 1958 the Speaker ruled that a matter which arises from a question on the order paper is itself a proceeding in Parliament: HC Deb 591 cc 807-809. Back
158 Coffin v Coffin (1808) 4 Mass 1; HC (1938-39) 101. This is referred to by S.A. de Smith in `Parliamentary Privilege and the Bill of Rights' MLR Sept. 1958 as `a decision of strong persuasive authority' (p 479). Back
159 Though such an action would also be considered a serious contempt. See Report of the Committee of Privileges HC (1947) 36. Back
160 Braddock v Tillotsons Newspapers Ltd 2 AER 306. Mrs Braddock did not succeed in her action. For the two petitions for leave for witnesses to appear see CJ (1948) 14. Back
161 Appointed under the Church of England Assembly (Powers) Act 1919. It consists of an equal number of members of both Houses, nominated by the Lord Chancellor and the Speaker: see Erskine May, 22nd ed (1997), p 597. Back
162 Established by the House of Commons (Administration) Act 1978: For both the Act and the Commission see Erskine May, 22nd ed. pp 202-204. See too paragraph 248 below. Back
163 Resolution of the House of Commons, 8 July 1958: `That this House does not consider that Mr. Strauss's letter of the 8th day of February 1957 was a proceeding in Parliament and is of opinion therefore that the letters from the Chairman of the London Electricity Board and the Board's Solicitors constituted no breach of Privilege': CJ (1957-58) 260; HC Deb 591 cc 207-346. This resolution had the effect of rejecting the contrary recommendation contained in the report of the Committee of Privileges. The Attorney General, a member of the Privileges Committee, opposed the findings of the report: HC (1956-57) 305, pp xxix-xxxi. Back
164 HC (1976-77) 417, paragraph 7 (the paragraph containing this recommendation was not put to the House of Commons for approval). Back
165 Second Report HL (1969-70) 109; HC 261, p 5, paragraph 1(b) of the proposed statutory definition. For a judicial consideration of a member's functions, other than in relation to proceedings, see Attorney General of Ceylon v de Livera AC 103 (judicial committee of the Privy Council). Back
166 HC (1966-67) 34, p xxvii, paragraph 86. Back
167 HC (1976-77) 417, paragraph 7. Back
168 QQ 512, 462. Back
169 Paragraphs 36-41 above. Back
170 QQ 412-416, 648; vol 2, p 160, paragraph 5. Back
171 Q 416. Back
172 R v Rule 2 KB 375 (complaint by a constituent to an MP about the conduct of a policeman and a magistrate and asking for his assistance in bringing the matter to the attention of a minister: a member had sufficient interest in the subject matter of the complaint to permit the occasion of the publication of the complaint to be privileged at common law); Beach v Freeson 1 QB 14 (letters by a member to the Lord Chancellor and the Law Society complaining of the conduct of a solicitor, based on representations from a constituent. A member of Parliament had both an interest and a duty to communicate appropriately any substantial complaint from a constituent concerning a professional person or firm). Back
173 Report of the House of Commons Select Committee on the Official Secrets Act (1939): HC (1938-1939) 101, paragraph 4 (our emphasis in italics). Back
174 CJ (1938-39) 480. Back
175 Vol 3, pp 22-23 and 49. The Whitley Committees (there is one for each House) are joint bodies of management and staff. Their general object is to secure cooperation between the employer and those staff represented by trade unions in matters affecting the departments of the House and the welfare of staff, and to provide machinery for dealing with grievances, provided that the privileges of the House are not affected thereby. Back
176 See paragraphs 240-241 below. Back
177 Paragraph 41. Back
178 2 QB 460. See too Patricia Leopold `Proceedings in Parliament: the grey area', Public Law, Winter 1990. Back
179 The Attorney General lodged an appeal against this part of the judgment, but it was never heard as the parties reached an agreement. Back
180 The judge refused to permit Mr Rost to call evidence that the article had led to his deselection from a standing committee and had adversely affected his chances of being appointed chairman of the select committee on energy, on the ground that this would involve examining proceedings of the House. The correctness of this decision was subsequently doubted by the judicial committee of the Privy Council in Prebble v Television New Zealand 1 AC 321, 337, as betraying some confusion between the right to prove the occurrence of parliamentary events (see paragraph 41 above) and the embargo on questioning their propriety. This case is another example of the difficulties confronting a member who seeks to clear his name in respect of a statement which he considers defamatory regarding his parliamentary behaviour: see paragraphs 60-62 above. Back
181 The registers are more fully described in the Ninth Report of the Committee of Standards and Privileges, Public access to registers of interests, HC (1997-98) 437. Back
182 Vol 2, p 108. Back
183 See paragraphs 319-322 below; HC (1956-57) 305, paragraph 19; HC Deb 529 cols 397-398 (4 December 1957); Re Parliamentary Privilege Act 1770 AC. 331; Cmnd 431. There are at least two other occasions when, at the instance of the House of Commons, the Crown has made a special reference to the judicial committee under section 4 of the Judicial Committee Act 1833. Both references sought opinions on the interpretation of statutes imposing particular disqualifications for sitting and voting in the House: Re. Sir Stuart Samuel AC 514; Re. Rev J. G. MacManaway AC 161. In each case the House adopted the advice of the judicial committee. On this and the Strauss case generally, see S. de Smith `Parliamentary Privilege and the Bill of Rights', Modern Law Review, vol 21, No 5, September 1958. Back
184 e.g. QQ 475, 652, 743, 874. Back
185 Claim of Right Act 1689 (c 28). Back
186 e.g. HC Deb 22 July 1993 cc 519-520; Geoffrey Lock, `The 1689 Bill of Rights', Political Studies XXXVII, December 1989, pp 556-558. Back
187 QQ 606-607, 734-35; vol 2, p 159. Back
© Parliamentary copyright 1999 Prepared 9 April 1999 | http://jfav.blogspot.com/ | 13 |
19 | Framing effect (psychology)
The framing effect is an example of cognitive bias in which people react differently to a particular choice depending on whether it is presented as a loss or as a gain. People tend to avoid risk when a positive frame is presented but to seek risk when a negative frame is presented. Gains and losses are defined in the scenario as descriptions of outcomes (e.g. lives lost or saved, patients treated or untreated, lives saved or lost in accidents), and the outcomes are further characterized by the likelihood, probability, or risk of their occurrence.
Prospect theory shows that a loss is more significant than the equivalent gain and also that a sure gain (certainty effect and pseudocertainty effect) is favored over a probabilistic gain and that a probabilistic loss is preferred to a definite loss. One of the dangers of framing effects is that people are often provided options within the context of only one of the two frames.
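Prospect theory's asymmetry between losses and gains is usually modeled with an S-shaped value function that is concave for gains, convex for losses, and steeper for losses. The short Python sketch below is illustrative only and is not taken from the sources cited in this article; the parameter values (alpha = beta = 0.88, lambda = 2.25) are the median estimates Tversky and Kahneman reported in their 1992 work on cumulative prospect theory, and the function name is arbitrary.

    def prospect_value(x, alpha=0.88, beta=0.88, lam=2.25):
        # Subjective value of a gain or loss x measured from a reference point.
        if x >= 0:
            return x ** alpha          # gains: concave (diminishing sensitivity)
        return -lam * (-x) ** beta     # losses: convex and weighted by lam > 1

    print(prospect_value(100))    # about  57.5
    print(prospect_value(-100))   # about -129.4: the loss looms larger than the equal gain

Because the loss weight is greater than 1, a loss of a given size is felt more strongly than a gain of the same size, which is why moving a description from a gain frame to a loss frame can reverse preferences.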
The concept helps to develop an understanding of frame analysis within social movements, and also of the formation of political opinion, where spin plays a large role in political opinion polls that are framed to encourage a response beneficial to the organization that commissioned the poll. It has been suggested that the use of this technique is discrediting political polls themselves. The effect is reduced, or even eliminated, if ample, credible information is provided to people.
Participants were asked to choose between two treatments for 600 people affected by a deadly disease. Treatment A was predicted to result in 400 deaths, whereas treatment B had a 33% chance that no one would die but a 66% chance that everyone would die. This choice was then presented to participants either with positive framing, i.e. how many people would live, or with negative framing, i.e. how many people would die.
|Framing||Treatment A||Treatment B|
|Positive||"Saves 200 lives"||"A 33% chance of saving all 600 people, 66% possibility of saving no one."|
|Negative||"400 people will die"||"A 33% chance that no people will die, 66% probability that all 600 will die."|
Treatment A was chosen by 72% of participants when it was presented with positive framing ("saves 200 lives") dropping to only 22% when the same choice was presented with negative framing ("400 people will die").
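The two treatments are statistically equivalent, so the reversal reflects the description rather than the prospects. A minimal Python sketch, not part of the original study, makes the arithmetic explicit; it uses the exact one-third and two-thirds probabilities that the rounded 33% and 66% figures stand for.

    def expected_survivors(outcomes):
        # outcomes: list of (probability, people_saved) pairs
        return sum(p * saved for p, saved in outcomes)

    treatment_a = [(1.0, 200)]              # 200 of the 600 saved for certain
    treatment_b = [(1/3, 600), (2/3, 0)]    # one-third chance that all 600 are saved

    print(expected_survivors(treatment_a))  # 200.0
    print(expected_survivors(treatment_b))  # 200.0

Both options have an expected outcome of 200 survivors (and 400 deaths), so a systematic preference for Treatment A under the gain frame and for Treatment B under the loss frame cannot be explained by the expected values alone.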
This effect has been shown in other contexts:
- 93% of PhD students registered early when a penalty fee for late registration was emphasized, with only 67% doing so when this was presented as a discount for earlier registration.
- 62% of people disagreed with allowing "public condemnation of democracy", but only 46% of people agreed that it was right to "forbid public condemnation of democracy" (Rugg, as cited in Plous, 1993).
- More people will support an economic policy when the employment rate is emphasized than when the associated unemployment rate is highlighted.
- It has been argued that pretrial detention may increase a defendant's willingness to accept a plea bargain, since imprisonment, rather than freedom, will be his baseline, and pleading guilty will be viewed as an event that will cause his earlier release rather than as an event that will put him in prison.
The framing effect has consistently proven to be one of the strongest biases in decision making. In general, susceptibility to framing effects increases with age. Age difference factors are particularly important when considering health care and financial decisions.
Childhood and Adolescence
Framing effects in decision-making become stronger as children age. This is partially because qualitative reasoning increases with age. While preschoolers are more likely to make decisions based on quantitative properties, such as probability of an outcome, elementary schoolers and adolescents become progressively more likely to reason qualitatively, opting for a sure option in a gain frame and a risky option in a loss frame regardless of probabilities. The increase in qualitative thinking is related to an increase in “gist based” thinking that occurs over a lifetime.
However, qualitative reasoning, and thus susceptibility to framing effects, is still not as strong in adolescents as in adults, and adolescents are more likely than adults to choose the risky option under both the gain and loss frames of a given scenario. One explanation for adolescent tendencies toward risky choices is that they lack real-world experience with negative consequences, and thus over-rely on conscious evaluation of risks and benefits, focusing on specific information and details or quantitative analysis. This reduces the influence of framing effects and leads to greater consistency across frames of a given scenario. Children between the ages of 10 and 12 are more likely to take risks and show framing effects, while younger children consider only the quantitative differences between the two options presented.
Younger adults are more likely than older adults to be enticed by risk-taking when presented with loss frame trials.
In multiple studies of undergraduate students, researchers have found that students are more likely to prefer options framed positively. For example, they are more likely to enjoy meat labeled 75% lean meat as opposed to 25% fat, or use condoms advertised as being 95% effective as opposed to having a 5% risk of failure.
Young adults are especially susceptible to framing effects when presented with an ill-defined problem in which there is no correct answer and individuals must arbitrarily determine what information they consider relevant. For example, undergraduate students are more willing to purchase an item such as a movie ticket after losing an amount equivalent to the item’s cost than after losing the item itself.
The framing effect is greater in older adults than in younger adults or adolescents. This may be a result of enhanced negativity bias, though some sources claim that the negativity bias actually decreases with age. Another possible cause is that older adults have fewer cognitive resources available to them and are more likely to default to less cognitively demanding strategies when faced with a decision. They tend to rely on easily accessible information, or frames, regardless of whether that information is relevant to making the decision in question. Several studies have shown that younger adults will make less biased decisions than older adults because they base their choices on interpretations of patterns of events and can better employ decision making strategies that require cognitive resources like working-memory skills. Older adults, on the other hand, make choices based on immediate reactions to gains and losses.
Older adults' lack of cognitive resources, such as flexibility in decision-making strategies, may cause them to be influenced by emotional frames more than younger adults or adolescents are. In addition, as individuals age, they make decisions more quickly than their younger counterparts (Johnson; Mata, Schooler, and Rieskamp). It is significant that, when prompted to do so, older adults will often make a less biased decision by reevaluating their original choice.
The increase in framing effects among older adults has important implications, especially in medical contexts. Older adults are influenced heavily by the inclusion or exclusion of extraneous details, meaning they are likely to make serious medical decisions based on how doctors frame the two options rather than on the qualitative differences between the options, which can lead them to form their choices inappropriately.
When considering cancer treatments, framing can shift older adults’ focus from short- to long-term survival under a negative and positive frame, respectively. When presented with treatment descriptions described in positive, negative, or neutral terms, older adults are significantly more likely to agree to a treatment when it is positively described than they are to agree to the same treatment when it is described neutrally or negatively. Additionally, framing often leads to inconsistency in choice: a change in description qualities after an initial choice is made can cause older adults to revoke their initial decision in favor of an alternative option. Older adults also remember positively framed statements more accurately than negatively framed statements. This has been demonstrated by evaluating older adults’ recall of statements in pamphlets about health care issues.
- Choice architecture
- Overton window
- Prospect theory
- Status quo bias
- Thinking, Fast and Slow
- Fuzzy-trace theory
References and sources
- "a Glass Half Empty Is More Persuasive Than a Glass Half Full".
- Plous, 1993
- Tversky & Kahneman, 1981
- Rothman, A. J.; Salovey, P.; Antone, C.; Keough, K.; Martin, C. D. (1993). "The Influence of Message Framing on Intentions to Perform Health Behaviors". Journal of Experimental Social Psychology 29 (5): 408. doi:10.1006/jesp.1993.1019.
- Clark, 2009
- Druckman, 2001a
- Druckman, 2001b
- Gätcher, Orzen, Renner, & Stamer, (2009)
- Stephanos Bibas (June 2004). Plea Bargaining outside the Shadow of Trial 117 (8). Harvard Law Review. pp. 2463–2547
- Thomas, A. K.; Millar, P. R. (2011). "Reducing the Framing Effect in Older and Younger Adults by Encouraging Analytic Processing". The Journals of Gerontology Series B: Psychological Sciences and Social Sciences 67B (2): 139. doi:10.1093/geronb/gbr076.
- Erber, Joan (2013). Aging and Older Adulthood (3 ed.). John Wiley & Sons. p. 218.
- Peters, Ellen; Finucane, Melissa; MacGregor, Donald; Slovic, Paul (2000). "The Bearable Lightness of Aging: Judgment and Decision Processes in Older Adults". In Paul C. Stern and Laura L. Carstensen. The aging mind: opportunities in cognitive research. Washington, D.C.: National Academy Press. ISBN 0-309-06940-8.
- Hanoch, Yaniv; Thomas Rice (2006). "Can Limiting Choice Increase Social Welfare? The Elderly and Health Insurance". The Milbank Quarterly 84 (1): 37–73. Retrieved 2013-04-16.
- Carpenter, S. M.; Yoon, C. (2011). "Aging and consumer decision making". Annals of the New York Academy of Sciences 1235 (1): E1–E12. doi:10.1111/j.1749-6632.2011.06390.x. PMID 22360794.
- Reyna, V. F.; Farley, F. (2006). "Risk and Rationality in Adolescent Decision Making: Implications for Theory, Practice, and Public Policy". Psychological Science in the Public Interest 7: 1. doi:10.1111/j.1529-1006.2006.00026.x.
- Albert, D.; Steinberg, L. (2011). "Judgment and Decision Making in Adolescence". Journal of Research on Adolescence 21: 211. doi:10.1111/j.1532-7795.2010.00724.x.
- Strough, J.; Karns, T. E.; Schlosnagle, L. (2011). "Decision-making heuristics and biases across the life span". Annals of the New York Academy of Sciences 1235: 57–74. doi:10.1111/j.1749-6632.2011.06208.x. PMID 22023568.
- Reyna, V. F. (2008). "A Theory of Medical Decision Making and Health: Fuzzy Trace Theory". Medical Decision Making 28 (6): 850–865. doi:10.1177/0272989X08327066. PMC 2617718. PMID 19015287.
- Schlottmann, A.; Tring, J. (2005). "How Children Reason about Gains and Losses: Framing Effects in Judgement and Choice". Swiss Journal of Psychology 64 (3): 153. doi:10.1024/1421-0185.64.3.153.
- Boyer, T. (2006). "The development of risk-taking: A multi-perspective review". Developmental Review 26 (3): 291–345. doi:10.1016/j.dr.2006.05.002.
- Revlin, Russell (2013). "Chapter 11: Solving Problems". Cognition Theory and Practice. New York, New York: Worth Publishers.
- Watanabe, S.; Shibutani, H. (2010). "Aging and decision making: Differences in susceptibility to the risky-choice framing effect between older and younger adults in Japan". Japanese Psychological Research 52 (3): 163. doi:10.1111/j.1468-5884.2010.00432.x.
- Löckenhoff, C. E. (2011). "Age, time, and decision making: From processing speed to global time horizons". Annals of the New York Academy of Sciences 1235: 44–56. doi:10.1111/j.1749-6632.2011.06209.x. PMID 22023567.
- Clark, D (2009). Framing effects exposed. Pearson Education.
- Druckman, J. (2001a). "Evaluating framing effects". Journal of Economic Psychology 22: 96–101.
- Druckman, J. (2001b). "Using credible advice to overcome framing effects". Journal of Law, Economics, and Organization 17: 62–82. doi:10.1093/jleo/17.1.62.
- Gätcher, S.; Orzen, H.; Renner, E.; Stamer, C. (2009). "Are experimental economists prone to framing effects? A natural field experiment". Journal of Economic Behavior & Organization.
- Plous, Scott (1993). The psychology of judgment and decision making. McGraw-Hill. ISBN 978-0-07-050477-6.
- Tversky, Amos; Kahneman, Daniel (1981). "The Framing of decisions and the psychology of choice". Science 211 (4481): 453–458. doi:10.1126/science.7455683. PMID 7455683. | http://en.wikipedia.org/wiki/Framing_effect_(psychology) | 13 |
66 | Confirmation bias (also called confirmatory bias or myside bias) is a tendency of people to favor information that confirms their beliefs or hypotheses.[Note 1] People display this bias when they gather or remember information selectively, or when they interpret it in a biased way. The effect is stronger for emotionally charged issues and for deeply entrenched beliefs. For example, in reading about gun control, people usually prefer sources that affirm their existing attitudes. They also tend to interpret ambiguous evidence as supporting their existing position. Biased search, interpretation and memory have been invoked to explain attitude polarization (when a disagreement becomes more extreme even though the different parties are exposed to the same evidence), belief perseverance (when beliefs persist after the evidence for them is shown to be false), the irrational primacy effect (a greater reliance on information encountered early in a series) and illusory correlation (when people falsely perceive an association between two events or situations).
A series of experiments in the 1960s suggested that people are biased toward confirming their existing beliefs. Later work re-interpreted these results as a tendency to test ideas in a one-sided way, focusing on one possibility and ignoring alternatives. In certain situations, this tendency can bias people's conclusions. Explanations for the observed biases include wishful thinking and the limited human capacity to process information. Another explanation is that people show confirmation bias because they are weighing up the costs of being wrong, rather than investigating in a neutral, scientific way.
Confirmation biases contribute to overconfidence in personal beliefs and can maintain or strengthen beliefs in the face of contrary evidence. Poor decisions due to these biases have been found in military, political, and organizational contexts.
Confirmation biases are effects in information processing, distinct from the behavioral confirmation effect, also called "self-fulfilling prophecy", in which people's expectations affect their behaviour to make the expectations come true. Some psychologists use "confirmation bias" to refer to any way in which people avoid rejecting a belief, whether in searching for evidence, interpreting it, or recalling it from memory. Others restrict the term to selective collection of evidence.[Note 2]
Biased search for information
Experiments have repeatedly found that people tend to test hypotheses in a one-sided way, by searching for evidence consistent with the hypothesis they hold at a given time. Rather than searching through all the relevant evidence, they ask questions that are phrased so that an affirmative answer supports their hypothesis. They look for the consequences that they would expect if their hypothesis were true, rather than what would happen if it were false. For example, someone who is trying to identify a number using yes/no questions and suspects that the number is 3 might ask, "Is it an odd number?" People prefer this sort of question, called a "positive test", even when a negative test such as "Is it an even number?" would yield exactly the same information. However, this does not mean that people seek tests that are guaranteed to give a positive answer. In studies where subjects could select either such pseudo-tests or genuinely diagnostic ones, they favored the genuinely diagnostic.
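The claim that the positive and negative questions "yield exactly the same information" can be checked directly. The following Python sketch assumes, purely for illustration, that the unknown number is one of 1 to 10; neither the candidate set nor the function names come from the studies described here.

    candidates = list(range(1, 11))   # assumed hypothesis space: the numbers 1 to 10

    def partition(question):
        # Count how many candidates remain for a "yes" and for a "no" answer.
        yes = [n for n in candidates if question(n)]
        no = [n for n in candidates if not question(n)]
        return len(yes), len(no)

    print(partition(lambda n: n % 2 == 1))   # "Is it an odd number?"  -> (5, 5)
    print(partition(lambda n: n % 2 == 0))   # "Is it an even number?" -> (5, 5)

Both questions split the candidate set into the same two halves, so whichever answer comes back they rule out exactly the same possibilities.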
The preference for positive tests is not itself a bias, since positive tests can be highly informative. However, in conjunction with other effects, this strategy can confirm existing beliefs or assumptions, independently of whether they are true. In real-world situations, evidence is often complex and mixed. For example, various contradictory ideas about someone could each be supported by concentrating on one aspect of his or her behavior. Thus any search for evidence in favor of a hypothesis is likely to succeed. One illustration of this is the way the phrasing of a question can significantly change the answer. For example, people who are asked, "Are you happy with your social life?" report greater satisfaction than those asked, "Are you unhappy with your social life?"
Even a small change in the wording of a question can affect how people search through available information, and hence the conclusions they reach. This was shown using a fictional child custody case. Subjects read that Parent A was moderately suitable to be the guardian in multiple ways. Parent B had a mix of salient positive and negative qualities: a close relationship with the child but a job that would take him or her away for long periods. When asked, "Which parent should have custody of the child?" the subjects looked for positive attributes and a majority chose Parent B. However, when the question was, "Which parent should be denied custody of the child?" they looked for negative attributes, but again a majority answered Parent B, implying that Parent A should have custody.
Similar studies have demonstrated how people engage in biased search for information, but also that this phenomenon may be limited by a preference for genuine diagnostic tests, where they are available. In an initial experiment, subjects had to rate another person on the introversion-extroversion personality dimension on the basis of an interview. They chose the interview questions from a given list. When the interviewee was introduced as an introvert, the subjects chose questions that presumed introversion, such as, "What do you find unpleasant about noisy parties?" When the interviewee was described as extroverted, almost all the questions presumed extroversion, such as, "What would you do to liven up a dull party?" These loaded questions gave the interviewees little or no opportunity to falsify the hypothesis about them. However, a later version of the experiment gave the subjects less presumptive questions to choose from, such as, "Do you shy away from social interactions?" Subjects preferred to ask these more diagnostic questions, showing only a weak bias towards positive tests. This pattern, of a main preference for diagnostic tests and a weaker preference for positive tests, has been replicated in other studies.
Another experiment gave subjects a particularly complex rule-discovery task involving moving objects simulated by a computer. Objects on the computer screen followed specific laws, which the subjects had to figure out. They could "fire" objects across the screen to test their hypotheses. Despite making many attempts over a ten-hour session, none of the subjects worked out the rules of the system. They typically sought to confirm rather than falsify their hypotheses, and were reluctant to consider alternatives. Even after seeing evidence that objectively refuted their working hypotheses, they frequently continued doing the same tests. Some of the subjects were instructed in proper hypothesis-testing, but these instructions had almost no effect.
Confirmation biases are not limited to the collection of evidence. Even if two individuals have the same information, the way they interpret it can be biased.
A team at Stanford University ran an experiment with subjects who felt strongly about capital punishment, with half in favor and half against. Each of these subjects read descriptions of two studies; a comparison of U.S. states with and without the death penalty, and a comparison of murder rates in a state before and after the introduction of the death penalty. After reading a quick description of each study, the subjects were asked whether their opinions had changed. They then read a much more detailed account of each study's procedure and had to rate how well-conducted and convincing that research was. In fact, the studies were fictional. Half the subjects were told that one kind of study supported the deterrent effect and the other undermined it, while for other subjects the conclusions were swapped.
The subjects, whether proponents or opponents, reported shifting their attitudes slightly in the direction of the first study they read. Once they read the more detailed descriptions of the two studies, they almost all returned to their original belief regardless of the evidence provided, pointing to details that supported their viewpoint and disregarding anything contrary. Subjects described studies supporting their pre-existing view as superior to those that contradicted it, in detailed and specific ways. Writing about a study that seemed to undermine the deterrence effect, a death penalty proponent wrote, "The research didn't cover a long enough period of time", while an opponent's comment on the same study said, "No strong evidence to contradict the researchers has been presented". The results illustrated that people set higher standards of evidence for hypotheses that go against their current expectations. This effect, known as "disconfirmation bias", has been supported by other experiments.
A study of biased interpretation took place during the 2004 US presidential election and involved subjects who described themselves as having strong feelings about the candidates. They were shown apparently contradictory pairs of statements, either from Republican candidate George W. Bush, Democratic candidate John Kerry or a politically neutral public figure. They were also given further statements that made the apparent contradiction seem reasonable. From these three pieces of information, they had to decide whether or not each individual's statements were inconsistent. There were strong differences in these evaluations, with subjects much more likely to interpret statements by the candidate they opposed as contradictory.
In this experiment, the subjects made their judgments while in a magnetic resonance imaging (MRI) scanner which monitored their brain activity. As subjects evaluated contradictory statements by their favored candidate, emotional centers of their brains were aroused. This did not happen with the statements by the other figures. The experimenters inferred that the different responses to the statements were not due to passive reasoning errors. Instead, the subjects were actively reducing the cognitive dissonance induced by reading about their favored candidate's irrational or hypocritical behavior.
Biased interpretation is not restricted to emotionally significant topics. In another experiment, subjects were told a story about a theft. They had to rate the evidential importance of statements arguing either for or against a particular character being responsible. When they hypothesized that character's guilt, they rated statements supporting that hypothesis as more important than conflicting statements.
Even if people have sought and interpreted evidence in a neutral manner, they may still remember it selectively to reinforce their expectations. This effect is called "selective recall", "confirmatory memory" or "access-biased memory". Psychological theories differ in their predictions about selective recall. Schema theory predicts that information matching prior expectations will be more easily stored and recalled. Some alternative approaches say that surprising information stands out more and so is more memorable. Predictions from both these theories have been confirmed in different experimental contexts, with no theory winning outright.
In one study, subjects read a profile of a woman which described a mix of introverted and extroverted behaviors. They later had to recall examples of her introversion and extroversion. One group was told this was to assess the woman for a job as a librarian, while a second group were told it was for a job in real estate sales. There was a significant difference between what these two groups recalled, with the "librarian" group recalling more examples of introversion and the "sales" groups recalling more extroverted behavior. A selective memory effect has also been shown in experiments that manipulate the desirability of personality types. In one of these, a group of subjects were shown evidence that extroverted people are more successful than introverts. Another group were told the opposite. In a subsequent, apparently unrelated, study, they were asked to recall events from their lives in which they had been either introverted or extroverted. Each group of subjects provided more memories connecting themselves with the more desirable personality type, and recalled those memories more quickly.
One study showed how selective memory can maintain belief in extrasensory perception (ESP). Believers and disbelievers were each shown descriptions of ESP experiments. Half of each group were told that the experimental results supported the existence of ESP, while the others were told they did not. In a subsequent test, subjects recalled the material accurately, apart from believers who had read the non-supportive evidence. This group remembered significantly less information and some of them incorrectly remembered the results as supporting ESP.
A similar cognitive bias found in individuals is the backfire effect, in which individuals challenged with evidence contradictory to their beliefs tend to reject the evidence and instead become an even firmer supporter of their initial belief. The phrase was first coined by Brendan Nyhan and Jason Reifler in a paper entitled "When Corrections Fail: The persistence of political misperceptions".
Polarization of opinion
When people with opposing views interpret new information in a biased way, their views can move even further apart. This is called "attitude polarization". The effect was demonstrated by an experiment that involved drawing a series of red and black balls from one of two concealed "bingo baskets". Subjects knew that one basket contained 60% black and 40% red balls; the other, 40% black and 60% red. The experimenters looked at what happened when balls of alternating color were drawn in turn, a sequence that does not favor either basket. After each ball was drawn, subjects in one group were asked to state out loud their judgments of the probability that the balls were being drawn from one or the other basket. These subjects tended to grow more confident with each successive draw—whether they initially thought the basket with 60% black balls or the one with 60% red balls was the more likely source, their estimate of the probability increased. Another group of subjects were asked to state probability estimates only at the end of a sequence of drawn balls, rather than after each ball. They did not show the polarization effect, suggesting that it does not necessarily occur when people simply hold opposing positions, but rather when they openly commit to them.
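The normative Bayesian calculation shows why growing confidence on alternating draws is unwarranted: each black draw and each red draw shift the odds by exactly offsetting amounts. The Python sketch below is illustrative rather than taken from the study, and it assumes draws with replacement.

    p_basket1 = 0.5                         # prior for basket 1 (60% black); basket 2 is 60% red
    for color in ["black", "red"] * 5:      # ten draws of alternating color
        like1 = 0.6 if color == "black" else 0.4   # P(color | basket 1)
        like2 = 0.4 if color == "black" else 0.6   # P(color | basket 2)
        p_basket1 = p_basket1 * like1 / (p_basket1 * like1 + (1 - p_basket1) * like2)

    print(p_basket1)   # approximately 0.5: the alternating evidence favors neither basket

Subjects who announced a judgment after every draw nonetheless reported steadily increasing confidence in whichever basket they had initially favored, which is the polarization the experiment demonstrates.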
A less abstract study was the Stanford biased interpretation experiment in which subjects with strong opinions about the death penalty read about mixed experimental evidence. Twenty-three percent of the subjects reported that their views had become more extreme, and this self-reported shift correlated strongly with their initial attitudes. In later experiments, subjects also reported their opinions becoming more extreme in response to ambiguous information. However, comparisons of their attitudes before and after the new evidence showed no significant change, suggesting that the self-reported changes might not be real. Based on these experiments, Deanna Kuhn and Joseph Lao concluded that polarization is a real phenomenon but far from inevitable, only happening in a small minority of cases. They found that it was prompted not only by considering mixed evidence, but by merely thinking about the topic.
Charles Taber and Milton Lodge argued that the Stanford team's result had been hard to replicate because the arguments used in later experiments were too abstract or confusing to evoke an emotional response. The Taber and Lodge study used the emotionally charged topics of gun control and affirmative action. They measured the attitudes of their subjects towards these issues before and after reading arguments on each side of the debate. Two groups of subjects showed attitude polarization; those with strong prior opinions and those who were politically knowledgeable. In part of this study, subjects chose which information sources to read, from a list prepared by the experimenters. For example they could read the National Rifle Association's and the Brady Anti-Handgun Coalition's arguments on gun control. Even when instructed to be even-handed, subjects were more likely to read arguments that supported their existing attitudes. This biased search for information correlated well with the polarization effect.
Persistence of discredited beliefs
Confirmation biases can be used to explain why some beliefs remain when the initial evidence for them is removed. This belief perseverance effect has been shown by a series of experiments using what is called the "debriefing paradigm": subjects read fake evidence for a hypothesis, their attitude change is measured, then the fakery is exposed in detail. Their attitudes are then measured once more to see if their belief returns to its previous level.
A typical finding is that at least some of the initial belief remains even after a full debrief. In one experiment, subjects had to distinguish between real and fake suicide notes. The feedback was random: some were told they had done well while others were told they had performed badly. Even after being fully debriefed, subjects were still influenced by the feedback. They still thought they were better or worse than average at that kind of task, depending on what they had initially been told.
In another study, subjects read job performance ratings of two firefighters, along with their responses to a risk aversion test. These fictional data were arranged to show either a negative or positive association: some subjects were told that a risk-taking firefighter did better, while others were told they did less well than a risk-averse colleague. Even if these two case studies had been true, they would have been scientifically poor evidence for a conclusion about firefighters in general. However, the subjects found them subjectively persuasive. When the case studies were shown to be fictional, subjects' belief in a link diminished, but around half of the original effect remained. Follow-up interviews established that the subjects had understood the debriefing and taken it seriously. Subjects seemed to trust the debriefing, but regarded the discredited information as irrelevant to their personal belief.
Preference for early information
Experiments have shown that information is weighted more strongly when it appears early in a series, even when the order is unimportant. For example, people form a more positive impression of someone described as "intelligent, industrious, impulsive, critical, stubborn, envious" than when they are given the same words in reverse order. This irrational primacy effect is independent of the primacy effect in memory in which the earlier items in a series leave a stronger memory trace. Biased interpretation offers an explanation for this effect: seeing the initial evidence, people form a working hypothesis that affects how they interpret the rest of the information.
One demonstration of irrational primacy involved colored chips supposedly drawn from two urns. Subjects were told the color distributions of the urns, and had to estimate the probability of a chip being drawn from one of them. In fact, the colors appeared in a pre-arranged order. The first thirty draws favored one urn and the next thirty favored the other. The series as a whole was neutral, so rationally, the two urns were equally likely. However, after sixty draws, subjects favored the urn suggested by the initial thirty.
Another experiment involved a slide show of a single object, seen as just a blur at first and in slightly better focus with each succeeding slide. After each slide, subjects had to state their best guess of what the object was. Subjects whose early guesses were wrong persisted with those guesses, even when the picture was sufficiently in focus that other people could readily identify the object.
Illusory association between events
Illusory correlation is the tendency to see non-existent correlations in a set of data. This tendency was first demonstrated in a series of experiments in the late 1960s. In one experiment, subjects read a set of psychiatric case studies, including responses to the Rorschach inkblot test. They reported that the homosexual men in the set were more likely to report seeing buttocks, anuses or sexually ambiguous figures in the inkblots. In fact the case studies were fictional and, in one version of the experiment, had been constructed so that the homosexual men were less likely to report this imagery. In a survey, a group of experienced psychoanalysts reported the same set of illusory associations with homosexuality.
Another study recorded the symptoms experienced by arthritic patients, along with weather conditions over a 15-month period. Nearly all the patients reported that their pains were correlated with weather conditions, although the real correlation was zero.
This effect is a kind of biased interpretation, in that objectively neutral or unfavorable evidence is interpreted to support existing beliefs. It is also related to biases in hypothesis-testing behavior. In judging whether two events, such as illness and bad weather, are correlated, people rely heavily on the number of positive-positive cases: in this example, instances of both pain and bad weather. They pay relatively little attention to the other kinds of observation (of no pain and/or good weather). This parallels the reliance on positive tests in hypothesis testing. It may also reflect selective recall, in that people may have a sense that two events are correlated because it is easier to recall times when they happened together.
In the above fictional example, arthritic symptoms are more likely on days with no rain. However, people are likely to focus on the relatively large number of days which have both rain and symptoms. By concentrating on one cell of the table rather than all four, people can misperceive the relationship, in this case associating rain with arthritic symptoms.
Before psychological research on confirmation bias, the phenomenon had been observed anecdotally by writers, including the Greek historian Thucydides (c. 460 BC – c. 395 BC), Italian poet Dante Alighieri (1265–1321), English philosopher and scientist Francis Bacon (1561–1626), and Russian author Leo Tolstoy (1828–1910). Thucydides, in The Peloponnesian War wrote: "…for it is a habit of mankind to entrust to careless hope what they long for, and to use sovereign reason to thrust aside what they do not fancy." In the Divine Comedy, St. Thomas Aquinas cautions Dante when they meet in Paradise, "opinion—hasty—often can incline to the wrong side, and then affection for one's own opinion binds, confines the mind." Bacon, in the Novum Organum, wrote,
The human understanding when it has once adopted an opinion ... draws all things else to support and agree with it. And though there be a greater number and weight of instances to be found on the other side, yet these it either neglects or despises, or else by some distinction sets aside or rejects[.]
Tolstoy, in What is Art?, wrote:
I know that most men—not only those considered clever, but even those who are very clever, and capable of understanding most difficult scientific, mathematical, or philosophic problems—can very seldom discern even the simplest and most obvious truth if it be such as to oblige them to admit the falsity of conclusions they have formed, perhaps with much difficulty—conclusions of which they are proud, which they have taught to others, and on which they have built their lives.
Wason's research on hypothesis-testing
The term "confirmation bias" was coined by English psychologist Peter Wason. For an experiment published in 1960, he challenged subjects to identify a rule applying to triples of numbers. At the outset, they were told that (2,4,6) fits the rule. Subjects could generate their own triples and the experimenter told them whether or not each triple conformed to the rule.
While the actual rule was simply "any ascending sequence", the subjects had a great deal of difficulty in finding it, often announcing rules that were far more specific, such as "the middle number is the average of the first and last". The subjects seemed to test only positive examples—triples that obeyed their hypothesized rule. For example, if they thought the rule was, "Each number is two greater than its predecessor", they would offer a triple that fit this rule, such as (11,13,15) rather than a triple that violates it, such as (11,12,19).
Wason accepted falsificationism, according to which a scientific test of a hypothesis is a serious attempt to falsify it. He interpreted his results as showing a preference for confirmation over falsification, hence the term "confirmation bias".[Note 3] Wason also used confirmation bias to explain the results of his selection task experiment. In this task, subjects are given partial information about a set of objects, and have to specify what further information they would need to tell whether or not a conditional rule ("If A, then B") applies. It has been found repeatedly that people perform badly on various forms of this test, in most cases ignoring information that could potentially refute the rule.
Klayman and Ha's critique
A 1987 paper by Joshua Klayman and Young-Won Ha argued that the Wason experiments had not actually demonstrated a bias towards confirmation. Instead, Klayman and Ha interpreted the results in terms of a tendency to make tests that are consistent with the working hypothesis. They called this the "positive test strategy". This strategy is an example of a heuristic: a reasoning shortcut that is imperfect but easy to compute. Klayman and Ha used Bayesian probability and information theory as their standard of hypothesis-testing, rather than the falsificationism used by Wason. According to these ideas, each answer to a question yields a different amount of information, which depends on the person's prior beliefs. Thus a scientific test of a hypothesis is one that is expected to produce the most information. Since the information content depends on initial probabilities, a positive test can either be highly informative or uninformative. Klayman and Ha argued that when people think about realistic problems, they are looking for a specific answer with a small initial probability. In this case, positive tests are usually more informative than negative tests. However, in Wason's rule discovery task the answer—three numbers in ascending order—is very broad, so positive tests are unlikely to yield informative answers. Klayman and Ha supported their analysis by citing an experiment that used the labels "DAX" and "MED" in place of "fits the rule" and "doesn't fit the rule". This avoided implying that the aim was to find a low-probability rule. Subjects had much more success with this version of the experiment.
In light of this and other critiques, the focus of research moved away from confirmation versus falsification to examine whether people test hypotheses in an informative way, or an uninformative but positive way. The search for "true" confirmation bias led psychologists to look at a wider range of effects in how people process information.
Explanations
Confirmation bias is often described as a result of automatic, unintentional strategies rather than deliberate deception. According to Robert Maccoun, most biased evidence processing occurs through a combination of both "cold" (cognitive) and "hot" (motivated) mechanisms.
Cognitive explanations for confirmation bias are based on limitations in people's ability to handle complex tasks, and the shortcuts, called heuristics, that they use. For example, people may judge the reliability of evidence by using the availability heuristic, i.e. how readily a particular idea comes to mind. It is also possible that people can only focus on one thought at a time, so find it difficult to test alternative hypotheses in parallel. Another heuristic is the positive test strategy identified by Klayman and Ha, in which people test a hypothesis by examining cases where they expect a property or event to occur. This heuristic avoids the difficult or impossible task of working out how diagnostic each possible question will be. However, it is not universally reliable, so people can overlook challenges to their existing beliefs.
Motivational explanations involve an effect of desire on belief, sometimes called "wishful thinking". It is known that people prefer pleasant thoughts over unpleasant ones in a number of ways: this is called the "Pollyanna principle". Applied to arguments or sources of evidence, this could explain why desired conclusions are more likely to be believed true. According to experiments that manipulate the desirability of the conclusion, people demand a high standard of evidence for unpalatable ideas and a low standard for preferred ideas. In other words, they ask, "Can I believe this?" for some suggestions and, "Must I believe this?" for others. Although consistency is a desirable feature of attitudes, an excessive drive for consistency is another potential source of bias because it may prevent people from neutrally evaluating new, surprising information. Social psychologist Ziva Kunda combines the cognitive and motivational theories, arguing that motivation creates the bias, but cognitive factors determine the size of the effect.
Explanations in terms of cost-benefit analysis assume that people do not just test hypotheses in a disinterested way, but assess the costs of different errors. Using ideas from evolutionary psychology, James Friedrich suggests that people do not primarily aim at truth in testing hypotheses, but try to avoid the most costly errors. For example, employers might ask one-sided questions in job interviews because they are focused on weeding out unsuitable candidates. Yaacov Trope and Akiva Liberman's refinement of this theory assumes that people compare the two different kinds of error: accepting a false hypothesis or rejecting a true hypothesis. For instance, someone who underestimates a friend's honesty might treat him or her suspiciously and so undermine the friendship. Overestimating the friend's honesty may also be costly, but less so. In this case, it would be rational to seek, evaluate or remember evidence of their honesty in a biased way. When someone gives an initial impression of being introverted or extroverted, questions that match that impression come across as more empathic. This suggests that when talking to someone who seems to be an introvert, it is a sign of better social skills to ask, "Do you feel awkward in social situations?" rather than, "Do you like noisy parties?" The connection between confirmation bias and social skills was corroborated by a study of how college students get to know other people. Highly self-monitoring students, who are more sensitive to their environment and to social norms, asked more matching questions when interviewing a high-status staff member than when getting to know fellow students.
Psychologists Jennifer Lerner and Philip Tetlock distinguish two different kinds of thinking process. Exploratory thought neutrally considers multiple points of view and tries to anticipate all possible objections to a particular position, while confirmatory thought seeks to justify a specific point of view. Lerner and Tetlock say that when people expect to need to justify their position to other people, whose views they already know, they will tend to adopt a similar position to those people, and then use confirmatory thought to bolster their own credibility. However, if the external parties are overly aggressive or critical, people will disengage from thought altogether, and simply assert their personal opinions without justification. Lerner and Tetlock say that people only push themselves to think critically and logically when they know in advance they will need to explain themselves to others who are well-informed, genuinely interested in the truth, and whose views they don't already know. Because those conditions rarely exist, they argue, most people are using confirmatory thought most of the time.
In finance
Confirmation bias can lead investors to be overconfident, ignoring evidence that their strategies will lose money. In studies of political stock markets, investors made more profit when they resisted bias. For example, participants who interpreted a candidate's debate performance in a neutral rather than partisan way were more likely to profit. To combat the effect of confirmation bias, investors can try to adopt a contrary viewpoint "for the sake of argument". In one technique, they imagine that their investments have collapsed and ask themselves why this might happen.
In physical and mental health
Raymond Nickerson, a psychologist, blames confirmation bias for the ineffective medical procedures that were used for centuries before the arrival of scientific medicine. If a patient recovered, medical authorities counted the treatment as successful, rather than looking for alternative explanations such as that the disease had run its natural course. Biased assimilation is a factor in the modern appeal of alternative medicine, whose proponents are swayed by positive anecdotal evidence but treat scientific evidence hyper-critically.
Cognitive therapy was developed by Aaron T. Beck in the early 1960s and has become a popular approach. According to Beck, biased information processing is a factor in depression. His approach teaches people to treat evidence impartially, rather than selectively reinforcing negative outlooks. Phobias and hypochondria have also been shown to involve confirmation bias for threatening information.
In politics and law
Nickerson argues that reasoning in judicial and political contexts is sometimes subconsciously biased, favoring conclusions that judges, juries or governments have already committed to. Since the evidence in a jury trial can be complex, and jurors often reach decisions about the verdict early on, it is reasonable to expect an attitude polarization effect. The prediction that jurors will become more extreme in their views as they see more evidence has been borne out in experiments with mock trials. Both inquisitorial and adversarial criminal justice systems are affected by confirmation bias.
Confirmation bias can be a factor in creating or extending conflicts, from emotionally charged debates to wars: by interpreting the evidence in their favor, each opposing party can become overconfident that it is in the stronger position. On the other hand, confirmation bias can result in people ignoring or misinterpreting the signs of an imminent or incipient conflict. For example, psychologists Stuart Sutherland and Thomas Kida have each argued that US Admiral Husband E. Kimmel showed confirmation bias when playing down the first signs of the Japanese attack on Pearl Harbor.
A two-decade study of political pundits by Philip E. Tetlock found that, on the whole, their predictions were not much better than chance. Tetlock divided experts into "foxes" who maintained multiple hypotheses, and "hedgehogs" who were more dogmatic. In general, the hedgehogs were much less accurate. Tetlock blamed their failure on confirmation bias—specifically, their inability to make use of new information that contradicted their existing theories.
In the paranormal
One factor in the appeal of psychic "readings" is that listeners apply a confirmation bias which fits the psychic's statements to their own lives. By making a large number of ambiguous statements in each sitting, the psychic gives the client more opportunities to find a match. This is one of the techniques of cold reading, with which a psychic can deliver a subjectively impressive reading without any prior information about the client. Investigator James Randi compared the transcript of a reading to the client's report of what the psychic had said, and found that the client showed a strong selective recall of the "hits".
As a striking illustration of confirmation bias in the real world, Nickerson mentions numerological pyramidology: the practice of finding meaning in the proportions of the Egyptian pyramids. There are many different length measurements that can be made of, for example, the Great Pyramid of Giza and many ways to combine or manipulate them. Hence it is almost inevitable that people who look at these numbers selectively will find superficially impressive correspondences, for example with the dimensions of the Earth.
In science
A distinguishing feature of scientific thinking is the search for falsifying as well as confirming evidence. However, many times in the history of science, scientists have resisted new discoveries by selectively interpreting or ignoring unfavorable data. Previous research has shown that the assessment of the quality of scientific studies seems to be particularly vulnerable to confirmation bias. It has been found several times that scientists rate studies that report findings consistent with their prior beliefs more favorably than studies reporting findings inconsistent with their previous beliefs. However, assuming that the research question is relevant, the experimental design adequate, and the data clearly and comprehensively described, the results obtained should be of importance to the scientific community and should not be viewed prejudicially, regardless of whether they conform to current theoretical predictions.
Confirmation bias may thus be especially harmful to objective evaluations regarding nonconforming results since biased individuals may regard opposing evidence to be weak in principle and give little serious thought to revising their beliefs. Scientific innovators often meet with resistance from the scientific community, and research presenting controversial results frequently receives harsh peer review.
In the context of scientific research, confirmation biases can sustain theories or research programs in the face of inadequate or even contradictory evidence; the field of parapsychology has been particularly affected.
An experimenter's confirmation bias can potentially affect which data are reported. Data that conflict with the experimenter's expectations may be more readily discarded as unreliable, producing the so-called file drawer effect. To combat this tendency, scientific training teaches ways to prevent bias. For example, experimental design of randomized controlled trials (coupled with their systematic review) aims to minimize sources of bias. The social process of peer review is thought to mitigate the effect of individual scientists' biases, even though the peer review process itself may be susceptible to such biases.
In self-image
Social psychologists have identified two tendencies in the way people seek or interpret information about themselves. Self-verification is the drive to reinforce the existing self-image and self-enhancement is the drive to seek positive feedback. Both are served by confirmation biases. In experiments where people are given feedback that conflicts with their self-image, they are less likely to attend to it or remember it than when given self-verifying feedback. They reduce the impact of such information by interpreting it as unreliable. Similar experiments have found a preference for positive feedback, and the people who give it, over negative feedback.
- Backfire effect
- Cherry picking (fallacy)
- Cognitive bias mitigation
- Cognitive inertia
- Experimenter's bias
- Congruence bias
- Filter bubble
- Hostile media effect
- List of biases in judgment and decision making
- List of memory biases
- Observer-expectancy effect
- Reporting bias
- Publication bias
- Selective exposure theory
- Semmelweis reflex
- Woozle effect
- David Perkins, a geneticist, coined the term "myside bias" referring to a preference for "my" side of an issue. (Baron 2000, p. 195)
- "Assimilation bias" is another term used for biased interpretation of evidence. (Risen & Gilovich 2007, p. 113)
- Wason also used the term "verification bias". (Poletiek 2001, p. 73)
- Plous 1993, p. 233
- Darley, John M.; Gross, Paget H. (2000), "A Hypothesis-Confirming Bias in Labelling Effects", in Stangor, Charles, Stereotypes and prejudice: essential readings, Psychology Press, p. 212, ISBN 978-0-86377-589-5, OCLC 42823720
- Risen & Gilovich 2007
- Zweig, Jason (November 19, 2009), "How to Ignore the Yes-Man in Your Head", Wall Street Journal (Dow Jones & Company), retrieved 2010-06-13
- Nickerson 1998, pp. 177–178
- Kunda 1999, pp. 112–115
- Baron 2000, pp. 162–164
- Kida 2006, pp. 162–165
- Devine, Patricia G.; Hirt, Edward R.; Gehrke, Elizabeth M. (1990), "Diagnostic and confirmation strategies in trait hypothesis testing", Journal of Personality and Social Psychology (American Psychological Association) 58 (6): 952–963, doi:10.1037/0022-3522.214.171.1242, ISSN 1939-1315
- Trope, Yaacov; Bassok, Miriam (1982), "Confirmatory and diagnosing strategies in social information gathering", Journal of Personality and Social Psychology (American Psychological Association) 43 (1): 22–34, doi:10.1037/0022-35126.96.36.199, ISSN 1939-1315
- Klayman, Joshua; Ha, Young-Won (1987), "Confirmation, Disconfirmation and Information in Hypothesis Testing", Psychological Review (American Psychological Association) 94 (2): 211–228, doi:10.1037/0033-295X.94.2.211, ISSN 0033-295X, retrieved 2009-08-14
- Oswald & Grosjean 2004, pp. 82–83
- Kunda, Ziva; Fong, G.T.; Sanitoso, R.; Reber, E. (1993), "Directional questions direct self-conceptions", Journal of Experimental Social Psychology (Society of Experimental Social Psychology) 29: 62–63, ISSN 0022-1031 via Fine 2006, pp. 63–65
- Shafir, E. (1993), "Choosing versus rejecting: why some options are both better and worse than others", Memory and Cognition 21 (4): 546–556, PMID 8350746 via Fine 2006, pp. 63–65
- Snyder, Mark; Swann, Jr., William B. (1978), "Hypothesis-Testing Processes in Social Interaction", Journal of Personality and Social Psychology (American Psychological Association) 36 (11): 1202–1212, doi:10.1037/0022-35188.8.131.522 via Poletiek 2001, p. 131
- Kunda 1999, pp. 117–118
- Mynatt, Clifford R.; Doherty, Michael E.; Tweney, Ryan D. (1978), "Consequences of confirmation and disconfirmation in a simulated research environment", Quarterly Journal of Experimental Psychology 30 (3): 395–406, doi:10.1080/00335557843000007
- Kida 2006, p. 157
- Lord, Charles G.; Ross, Lee; Lepper, Mark R. (1979), "Biased assimilation and attitude polarization: The effects of prior theories on subsequently considered evidence", Journal of Personality and Social Psychology (American Psychological Association) 37 (11): 2098–2109, doi:10.1037/0022-35184.108.40.2068, ISSN 0022-3514
- Baron 2000, pp. 201–202
- Vyse 1997, p. 122
- Taber, Charles S.; Lodge, Milton (July 2006), "Motivated Skepticism in the Evaluation of Political Beliefs", American Journal of Political Science (Midwest Political Science Association) 50 (3): 755–769, doi:10.1111/j.1540-5907.2006.00214.x, ISSN 0092-5853
- Westen, Drew; Blagov, Pavel S.; Harenski, Keith; Kilts, Clint; Hamann, Stephan (2006), "Neural Bases of Motivated Reasoning: An fMRI Study of Emotional Constraints on Partisan Political Judgment in the 2004 U.S. Presidential Election", Journal of Cognitive Neuroscience (Massachusetts Institute of Technology) 18 (11): 1947–1958, doi:10.1162/jocn.2006.18.11.1947, PMID 17069484, retrieved 2009-08-14
- Gadenne, V.; Oswald, M. (1986), "Entstehung und Veränderung von Bestätigungstendenzen beim Testen von Hypothesen [Formation and alteration of confirmatory tendencies during the testing of hypotheses]", Zeitschrift für experimentelle und angewandte Psychologie 33: 360–374 via Oswald & Grosjean 2004, p. 89
- Hastie, Reid; Park, Bernadette (2005), "The Relationship Between Memory and Judgment Depends on Whether the Judgment Task is Memory-Based or On-Line", in Hamilton, David L., Social cognition: key readings, New York: Psychology Press, p. 394, ISBN 0-86377-591-8, OCLC 55078722
- Oswald & Grosjean 2004, pp. 88–89
- Stangor, Charles; McMillan, David (1992), "Memory for expectancy-congruent and expectancy-incongruent information: A review of the social and social developmental literatures", Psychological Bulletin (American Psychological Association) 111 (1): 42–61, doi:10.1037/0033-2909.111.1.42
- Snyder, M.; Cantor, N. (1979), "Testing hypotheses about other people: the use of historical knowledge", Journal of Experimental Social Psychology 15 (4): 330–342, doi:10.1016/0022-1031(79)90042-8 via Goldacre 2008, p. 231
- Kunda 1999, pp. 225–232
- Sanitioso, Rasyid; Kunda, Ziva; Fong, G.T. (1990), "Motivated recruitment of autobiographical memories", Journal of Personality and Social Psychology (American Psychological Association) 59 (2): 229–241, doi:10.1037/0022-35220.127.116.11, ISSN 0022-3514, PMID 2213492
- Russell, Dan; Jones, Warren H. (1980), "When superstition fails: Reactions to disconfirmation of paranormal beliefs", Personality and Social Psychology Bulletin (Society for Personality and Social Psychology) 6 (1): 83–88, doi:10.1177/014616728061012, ISSN 1552-7433 via Vyse 1997, p. 121
- "backfire effect". The Skeptic's Dictionary. Retrieved 26 April 2012.
- Silverman, Craig (2011-06-17). "The Backfire Effect". Columbia Journalism Review. Retrieved 2012-05-01. "When your deepest convictions are challenged by contradictory evidence, your beliefs get stronger."
- Nyhan, Brendan; Reifler, Jason (2010). "When Corrections Fail: The Persistence of Political Misperceptions". Political Behavior 32 (2): 303–330. doi:10.1007/s11109-010-9112-2. Retrieved 1 May 2012.
- Kuhn, Deanna; Lao, Joseph (March 1996), "Effects of Evidence on Attitudes: Is Polarization the Norm?", Psychological Science (American Psychological Society) 7 (2): 115–120, doi:10.1111/j.1467-9280.1996.tb00340.x
- Baron 2000, p. 201
- Miller, A.G.; McHoskey, J.W.; Bane, C.M.; Dowd, T.G. (1993), "The attitude polarization phenomenon: Role of response measure, attitude extremity, and behavioral consequences of reported attitude change", Journal of Personality and Social Psychology 64 (4): 561–574, doi:10.1037/0022-3518.104.22.1681
- Ross, Lee; Anderson, Craig A. (1982), "Shortcomings in the attribution process: On the origins and maintenance of erroneous social assessments", in Kahneman, Daniel; Slovic, Paul; Tversky, Amos, Judgment under uncertainty: Heuristics and biases, Cambridge University Press, pp. 129–152, ISBN 978-0-521-28414-1, OCLC 7578020
- Nickerson 1998, p. 187
- Kunda 1999, p. 99
- Ross, Lee; Lepper, Mark R.; Hubbard, Michael (1975), "Perseverance in self-perception and social perception: Biased attributional processes in the debriefing paradigm", Journal of Personality and Social Psychology (American Psychological Association) 32 (5): 880–892, doi:10.1037/0022-3522.214.171.1240, ISSN 0022-3514, PMID 1185517 via Kunda 1999, p. 99
- Baron 2000, pp. 197–200
- Fine 2006, pp. 66–70
- Plous 1993, pp. 164–166
- Redelmeir, D. A.; Tversky, Amos (1996), "On the belief that arthritis pain is related to the weather", Proceedings of the National Academy of Sciences 93 (7): 2895–2896, doi:10.1073/pnas.93.7.2895 via Kunda 1999, p. 127
- Kunda 1999, pp. 127–130
- Plous 1993, pp. 162–164
- Adapted from Fielder, Klaus (2004), "Illusory correlation", in Pohl, Rüdiger F., Cognitive Illusions: A Handbook on Fallacies and Biases in Thinking, Judgement and Memory, Hove, UK: Psychology Press, p. 103, ISBN 978-1-84169-351-4, OCLC 55124398
- Baron 2000, pp. 195–196
- Thucydides 4.108.4
- Alighieri, Dante. Paradiso canto XIII: 118–120. Trans. Allen Mandelbaum
- Bacon, Francis (1620). Novum Organum. reprinted in Burtt, E.A., ed. (1939), The English philosophers from Bacon to Mill, New York: Random House, p. 36 via Nickerson 1998, p. 176
- Tolstoy, Leo. What is Art? p. 124 (1899). In The Kingdom of God Is Within You (1893), he similarly declared, "The most difficult subjects can be explained to the most slow-witted man if he has not formed any idea of them already; but the simplest thing cannot be made clear to the most intelligent man if he is firmly persuaded that he knows already, without a shadow of doubt, what is laid before him" (ch. 3). Translated from the Russian by Constance Garnett, New York, 1894. Project Gutenberg edition released November 2002. Retrieved 2009-08-24.
- Gale, Maggie; Ball, Linden J. (2002), "Does Positivity Bias Explain Patterns of Performance on Wason's 2-4-6 task?", in Gray, Wayne D.; Schunn, Christian D., Proceedings of the Twenty-Fourth Annual Conference of the Cognitive Science Society, Routledge, p. 340, ISBN 978-0-8058-4581-5, OCLC 469971634
- Wason, Peter C. (1960), "On the failure to eliminate hypotheses in a conceptual task", Quarterly Journal of Experimental Psychology (Psychology Press) 12 (3): 129–140, doi:10.1080/17470216008416717, ISSN 1747-0226
- Nickerson 1998, p. 179
- Lewicka 1998, p. 238
- Oswald & Grosjean 2004, pp. 79–96
- Wason, Peter C. (1968), "Reasoning about a rule", Quarterly Journal of Experimental Psychology (Psychology Press) 20 (3): 273–28, doi:10.1080/14640746808400161, ISSN 1747-0226
- Sutherland, Stuart (2007), Irrationality (2nd ed.), London: Pinter and Martin, pp. 95–103, ISBN 978-1-905177-07-3, OCLC 72151566
- Barkow, Jerome H.; Cosmides, Leda; Tooby, John (1995), The adapted mind: evolutionary psychology and the generation of culture, Oxford University Press US, pp. 181–184, ISBN 978-0-19-510107-2, OCLC 33832963
- Oswald & Grosjean 2004, pp. 81–82, 86–87
- Lewicka 1998, p. 239
- Tweney, Ryan D.; Doherty, Michael E.; Worner, Winifred J.; Pliske, Daniel B.; Mynatt, Clifford R.; Gross, Kimberly A.; Arkkelin, Daniel L. (1980), "Strategies of rule discovery in an inference task", The Quarterly Journal of Experimental Psychology (Psychology Press) 32 (1): 109–123, doi:10.1080/00335558008248237, ISSN 1747-0226 (Experiment IV)
- Oswald & Grosjean 2004, pp. 86–89
- Hergovich, Schott & Burger 2010
- Maccoun 1998
- Friedrich 1993, p. 298
- Kunda 1999, p. 94
- Nickerson 1998, pp. 198–199
- Nickerson 1998, p. 200
- Nickerson 1998, p. 197
- Baron 2000, p. 206
- Matlin, Margaret W. (2004), "Pollyanna Principle", in Pohl, Rüdiger F., Cognitive Illusions: A Handbook on Fallacies and Biases in Thinking, Judgement and Memory, Hove: Psychology Press, pp. 255–272, ISBN 978-1-84169-351-4, OCLC 55124398
- Dawson, Erica; Gilovich, Thomas; Regan, Dennis T. (October 2002), "Motivated Reasoning and Performance on the Wason Selection Task", Personality and Social Psychology Bulletin (Society for Personality and Social Psychology) 28 (10): 1379–1387, doi:10.1177/014616702236869, retrieved 2009-09-30
- Ditto, Peter H.; Lopez, David F. (1992), "Motivated skepticism: use of differential decision criteria for preferred and nonpreferred conclusions", Journal of personality and social psychology (American Psychological Association) 63 (4): 568–584, doi:10.1037/0022-35126.96.36.1998, ISSN 0022-3514
- Nickerson 1998, p. 198
- Oswald & Grosjean 2004, pp. 91–93
- Friedrich 1993, pp. 299, 316–317
- Trope, Y.; Liberman, A. (1996), "Social hypothesis testing: cognitive and motivational mechanisms", in Higgins, E. Tory; Kruglanski, Arie W., Social Psychology: Handbook of basic principles, New York: Guilford Press, ISBN 978-1-57230-100-9, OCLC 34731629 via Oswald & Grosjean 2004, pp. 91–93
- Dardenne, Benoit; Leyens, Jacques-Philippe (1995), "Confirmation Bias as a Social Skill", Personality and Social Psychology Bulletin (Society for Personality and Social Psychology) 21 (11): 1229–1239, doi:10.1177/01461672952111011, ISSN 1552-7433
- Sandra L. Schneider, ed. (2003). Emerging perspectives on judgment and decision research. Cambridge [u.a.]: Cambridge University Press. p. 445. ISBN 052152718X.
- Haidt, Jonathan (2012). The Righteous Mind : Why Good People are Divided by Politics and Religion. New York: Pantheon Books. pp. 1473–4 (e–book edition). ISBN 0307377903.
- Lindzey, edited by Susan T. Fiske, Daniel T. Gilbert, Gardner (2010). The handbook of social psychology. (5th ed.). Hoboken, N.J.: Wiley. p. 811. ISBN 0470137495.
- Pompian, Michael M. (2006), Behavioral finance and wealth management: how to build optimal portfolios that account for investor biases, John Wiley and Sons, pp. 187–190, ISBN 978-0-471-74517-4, OCLC 61864118
- Hilton, Denis J. (2001), "The psychology of financial decision-making: Applications to trading, dealing, and investment analysis", Journal of Behavioral Finance (Institute of Behavioral Finance) 2 (1): 37–39, doi:10.1207/S15327760JPFM0201_4, ISSN 1542-7579
- Krueger, David; Mann, John David (2009), The Secret Language of Money: How to Make Smarter Financial Decisions and Live a Richer Life, McGraw Hill Professional, pp. 112–113, ISBN 978-0-07-162339-1, OCLC 277205993
- Nickerson 1998, p. 192
- Goldacre 2008, p. 233
- Singh, Simon; Ernst, Edzard (2008), Trick or Treatment?: Alternative Medicine on Trial, London: Bantam, pp. 287–288, ISBN 978-0-593-06129-9
- Atwood, Kimball (2004), "Naturopathy, Pseudoscience, and Medicine: Myths and Fallacies vs Truth", Medscape General Medicine 6 (1): 33
- Neenan, Michael; Dryden, Windy (2004), Cognitive therapy: 100 key points and techniques, Psychology Press, p. ix, ISBN 978-1-58391-858-6, OCLC 474568621
- Blackburn, Ivy-Marie; Davidson, Kate M. (1995), Cognitive therapy for depression & anxiety: a practitioner's guide (2 ed.), Wiley-Blackwell, p. 19, ISBN 978-0-632-03986-9, OCLC 32699443
- Harvey, Allison G.; Watkins, Edward; Mansell, Warren (2004), Cognitive behavioural processes across psychological disorders: a transdiagnostic approach to research and treatment, Oxford University Press, pp. 172–173, 176, ISBN 978-0-19-852888-3, OCLC 602015097
- Nickerson 1998, pp. 191–193
- Myers, D.G.; Lamm, H. (1976), "The group polarization phenomenon", Psychological Bulletin 83 (4): 602–627, doi:10.1037/0033-2909.83.4.602 via Nickerson 1998, pp. 193–194
- Halpern, Diane F. (1987), Critical thinking across the curriculum: a brief edition of thought and knowledge, Lawrence Erlbaum Associates, p. 194, ISBN 978-0-8058-2731-6, OCLC 37180929
- Roach, Kent (2010), "Wrongful Convictions: Adversarial and Inquisitorial Themes", North Carolina Journal of International Law and Commercial Regulation 35, SSRN 1619124, "Both adversarial and inquisitorial systems seem subject to the dangers of tunnel vision or confirmation bias."
- Baron 2000, pp. 191,195
- Kida 2006, p. 155
- Tetlock, Philip E. (2005), Expert Political Judgment: How Good Is It? How Can We Know?, Princeton, N.J.: Princeton University Press, pp. 125–128, ISBN 978-0-691-12302-8, OCLC 56825108
- Smith, Jonathan C. (2009), Pseudoscience and Extraordinary Claims of the Paranormal: A Critical Thinker's Toolkit, John Wiley and Sons, pp. 149–151, ISBN 978-1-4051-8122-8, OCLC 319499491
- Randi, James (1991), James Randi: psychic investigator, Boxtree, pp. 58–62, ISBN 978-1-85283-144-8, OCLC 26359284
- Nickerson 1998, p. 190
- Nickerson 1998, pp. 192–194
- Koehler 1993
- Mahoney 1977
- Horrobin 1990
- Proctor, Robert W.; Capaldi, E. John (2006), Why science matters: understanding the methods of psychological research, Wiley-Blackwell, p. 68, ISBN 978-1-4051-3049-3, OCLC 318365881
- Sternberg, Robert J. (2007), "Critical Thinking in Psychology: It really is critical", in Sternberg, Robert J.; Roediger III, Henry L.; Halpern, Diane F., Critical Thinking in Psychology, Cambridge University Press, p. 292, ISBN 0-521-60834-1, OCLC 69423179, "Some of the worst examples of confirmation bias are in research on parapsychology ... Arguably, there is a whole field here with no powerful confirming data at all. But people want to believe, and so they find ways to believe."
- Shadish, William R. (2007), "Critical Thinking in Quasi-Experimentation", in Sternberg, Robert J.; Roediger III, Henry L.; Halpern, Diane F., Critical Thinking in Psychology, Cambridge University Press, p. 49, ISBN 978-0-521-60834-3
- Jüni, P.; Altman, D. G.; Egger, M. (2001). "Systematic reviews in health care: Assessing the quality of controlled clinical trials". BMJ (Clinical research ed.) 323 (7303): 42–46. PMC 1120670. PMID 11440947.
- Shermer, Michael (July 2006), "The Political Brain", Scientific American, ISSN 0036-8733, retrieved 2009-08-14
- Emerson, G. B.; Warme, W. J.; Wolf, F. M.; Heckman, J. D.; Brand, R. A.; Leopold, S. S. (2010). "Testing for the Presence of Positive-Outcome Bias in Peer Review: A Randomized Controlled Trial". Archives of Internal Medicine 170 (21): 1934–1939. doi:10.1001/archinternmed.2010.406. PMID 21098355.
- Swann, William B.; Pelham, Brett W.; Krull, Douglas S. (1989), "Agreeable Fancy or Disagreeable Truth? Reconciling Self-Enhancement and Self-Verification", Journal of Personality and Social Psychology (American Psychological Association) 57 (5): 782–791, doi:10.1037/0022-35188.8.131.522, ISSN 0022–3514, PMID 2810025
- Swann, William B.; Read, Stephen J. (1981), "Self-Verification Processes: How We Sustain Our Self-Conceptions", Journal of Experimental Social Psychology (Academic Press) 17 (4): 351–372, doi:10.1016/0022-1031(81)90043-3, ISSN 0022–1031
- Story, Amber L. (1998), "Self-Esteem and Memory for Favorable and Unfavorable Personality Feedback", Personality and Social Psychology Bulletin (Society for Personality and Social Psychology) 24 (1): 51–64, doi:10.1177/0146167298241004, ISSN 1552-7433
- White, Michael J.; Brockett, Daniel R.; Overstreet, Belinda G. (1993), "Confirmatory Bias in Evaluating Personality Test Information: Am I Really That Kind of Person?", Journal of Counseling Psychology (American Psychological Association) 40 (1): 120–126, doi:10.1037/0022-0184.108.40.206, ISSN 0022-0167
- Swann, William B.; Read, Stephen J. (1981), "Acquiring Self-Knowledge: The Search for Feedback That Fits", Journal of Personality and Social Psychology (American Psychological Association) 41 (6): 1119–1128, ISSN 0022–3514
- Shrauger, J. Sidney; Lund, Adrian K. (1975), "Self-evaluation and reactions to evaluations from others", Journal of Personality (Duke University Press) 43 (1): 94–108, doi:10.1111/j.1467-6494.1975.tb00574, PMID 1142062
- Baron, Jonathan (2000), Thinking and deciding (3rd ed.), New York: Cambridge University Press, ISBN 0-521-65030-5, OCLC 316403966
- Fine, Cordelia (2006), A Mind of its Own: how your brain distorts and deceives, Cambridge, UK: Icon books, ISBN 1-84046-678-2, OCLC 60668289
- Friedrich, James (1993), "Primary error detection and minimization (PEDMIN) strategies in social cognition: a reinterpretation of confirmation bias phenomena", Psychological Review (American Psychological Association) 100 (2): 298–319, doi:10.1037/0033-295X.100.2.298, ISSN 0033-295X, PMID 8483985
- Goldacre, Ben (2008), Bad Science, London: Fourth Estate, ISBN 978-0-00-724019-7, OCLC 259713114
- Hergovich, Andreas; Schott, Reinhard; Burger, Christoph (2010), "Biased Evaluation of Abstracts Depending on Topic and Conclusion: Further Evidence of a Confirmation Bias Within Scientific Psychology", Current Psychology 29 (3): 188–209, doi:10.1007/s12144-010-9087-5
- Horrobin, David F. (1990), "The philosophical basis of peer review and the suppression of innovation", Journal of the American Medical Association 263 (10): 1438–1441, doi:10.1001/jama.263.10.1438, PMID 2304222
- Kida, Thomas E. (2006), Don't believe everything you think: the 6 basic mistakes we make in thinking, Amherst, New York: Prometheus Books, ISBN 978-1-59102-408-8, OCLC 63297791
- Koehler, Jonathan J. (1993), "The influence of prior beliefs on scientific judgments of evidence quality", Organizational Behavior and Human Decision Processes 56: 28–55, doi:10.1006/obhd.1993.1044
- Kunda, Ziva (1999), Social Cognition: Making Sense of People, MIT Press, ISBN 978-0-262-61143-5, OCLC 40618974
- Lewicka, Maria (1998), "Confirmation Bias: Cognitive Error or Adaptive Strategy of Action Control?", in Kofta, Mirosław; Weary, Gifford; Sedek, Grzegorz, Personal control in action: cognitive and motivational mechanisms, Springer, pp. 233–255, ISBN 978-0-306-45720-3, OCLC 39002877
- Maccoun, Robert J. (1998), "Biases in the interpretation and use of research results", Annual Review of Psychology 49: 259–87, doi:10.1146/annurev.psych.49.1.259, PMID 15012470
- Mahoney, Michael J. (1977), "Publication prejudices: an experimental study of confirmatory bias in the peer review system", Cognitive Therapy and Research 1 (2): 161–175, doi:10.1007/BF01173636
- Nickerson, Raymond S. (1998), "Confirmation Bias; A Ubiquitous Phenomenon in Many Guises", Review of General Psychology (Educational Publishing Foundation) 2 (2): 175–220, doi:10.1037/1089-26220.127.116.11, ISSN 1089-2680
- Oswald, Margit E.; Grosjean, Stefan (2004), "Confirmation Bias", in Pohl, Rüdiger F., Cognitive Illusions: A Handbook on Fallacies and Biases in Thinking, Judgement and Memory, Hove, UK: Psychology Press, pp. 79–96, ISBN 978-1-84169-351-4, OCLC 55124398
- Plous, Scott (1993), The Psychology of Judgment and Decision Making, McGraw-Hill, ISBN 978-0-07-050477-6, OCLC 26931106
- Poletiek, Fenna (2001), Hypothesis-testing behaviour, Hove, UK: Psychology Press, ISBN 978-1-84169-159-6, OCLC 44683470
- Risen, Jane; Gilovich, Thomas (2007), "Informal Logical Fallacies", in Sternberg, Robert J.; Roediger III, Henry L.; Halpern, Diane F., Critical Thinking in Psychology, Cambridge University Press, pp. 110–130, ISBN 978-0-521-60834-3, OCLC 69423179
- Vyse, Stuart A. (1997), Believing in magic: The psychology of superstition, New York: Oxford University Press, ISBN 0-19-513634-9, OCLC 35025826
- Stanovich, Keith (2009), What Intelligence Tests Miss: The Psychology of Rational Thought, New Haven (CT): Yale University Press, ISBN 978-0-300-12385-2, lay summary (21 November 2010)
- Westen, Drew (2007), The political brain: the role of emotion in deciding the fate of the nation, PublicAffairs, ISBN 978-1-58648-425-5, OCLC 86117725
- Skeptic's Dictionary: confirmation bias by Robert T. Carroll
- Teaching about confirmation bias, class handout and instructor's notes by K. H. Grobman
- Confirmation bias learning object, interactive number triples exercise by Rod McFarland, Simon Fraser University
- Brief summary of the 1979 Stanford assimilation bias study by Keith Rollag, Babson College | http://en.wikipedia.org/wiki/Confirmation_bias | 13 |
16 | In computer science, counting sort is an algorithm for sorting a collection of objects according to keys that are small integers; that is, it is an integer sorting algorithm. It operates by counting the number of objects that have each distinct key value, and using arithmetic on those counts to determine the positions of each key value in the output sequence. Its running time is linear in the number of items and the difference between the maximum and minimum key values, so it is only suitable for direct use in situations where the variation in keys is not significantly greater than the number of items. However, it is often used as a subroutine in another sorting algorithm, radix sort, that can handle larger keys more efficiently.
Because counting sort uses key values as indexes into an array, it is not a comparison sort, and the Ω(n log n) lower bound for comparison sorting does not apply to it. Bucket sort may be used for many of the same tasks as counting sort, with a similar time analysis; however, compared to counting sort, bucket sort requires linked lists, dynamic arrays or a large amount of preallocated memory to hold the sets of items within each bucket, whereas counting sort instead stores a single number (the count of items) per bucket.
Input and output assumptions
In the most general case, the input to counting sort consists of a collection of n items, each of which has a non-negative integer key whose maximum value is at most k. In some descriptions of counting sort, the input to be sorted is assumed to be more simply a sequence of integers itself, but this simplification does not accommodate many applications of counting sort. For instance, when used as a subroutine in radix sort, the keys for each call to counting sort are individual digits of larger item keys; it would not suffice to return only a sorted list of the key digits, separated from the items.
In applications such as in radix sort, a bound on the maximum key value k will be known in advance, and can be assumed to be part of the input to the algorithm. However, if the value of k is not already known then it may be computed by an additional loop over the data to determine the maximum key value that actually occurs within the data.
The output is an array of the items, in order by their keys. Because of the application to radix sorting, it is important for counting sort to be a stable sort: if two items have the same key as each other, they should have the same relative position in the output as they did in the input.
The algorithm
In summary, the algorithm loops over the items, computing a histogram of the number of times each key occurs within the input collection. It then performs a prefix sum computation (a second loop, over the range of possible keys) to determine, for each key, the starting position in the output array of the items having that key. Finally, it loops over the items again, moving each item into its sorted position in the output array.
In pseudocode, this may be expressed as follows:
# calculate histogram:
# allocate an array Count[0..k]
# initialize each array cell to zero
for each input item x:
    increment Count[key(x)]

# calculate starting index for each key:
total = 0
for i = 0, 1, ... k:
    oldCount = Count[i]
    Count[i] = total
    total = total + oldCount

# copy inputs into output array in order:
# allocate an output array Output[0..n-1]
for each input item x:
    Output[Count[key(x)]] = x
    increment Count[key(x)]
return Output
After the first for loop, Count[i] stores the number of items with key equal to i. After the second for loop, it instead stores the number of items with key less than i, which is the same as the first index at which an item with key i should be stored in the output array. Throughout the third loop, Count[i] always stores the next position in the output array into which an item with key i should be stored, so each item is moved into its correct position in the output array. The relative order of items with equal keys is preserved here; i.e., this is a stable sort.
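For concreteness, here is a minimal runnable sketch of the same three loops in Python; the function name, the key parameter, and the requirement that the caller supply the maximum key k are choices made for this example rather than part of the pseudocode above.

def counting_sort(items, k, key=lambda x: x):
    """Stable counting sort of items whose integer keys lie in 0..k."""
    # 1. Histogram: count how many items carry each key.
    count = [0] * (k + 1)
    for x in items:
        count[key(x)] += 1
    # 2. Prefix sum: count[i] becomes the first output index for key i.
    total = 0
    for i in range(k + 1):
        old_count = count[i]
        count[i] = total
        total += old_count
    # 3. Scatter: place each item at its key's next free output position.
    output = [None] * len(items)
    for x in items:
        output[count[key(x)]] = x
        count[key(x)] += 1
    return output

# Example: sort pairs by their first component; ties keep their input order (stability).
print(counting_sort([(3, 'a'), (1, 'b'), (3, 'c'), (0, 'd')], 3, key=lambda p: p[0]))
# [(0, 'd'), (1, 'b'), (3, 'a'), (3, 'c')]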
Because the algorithm uses only simple for loops, without recursion or subroutine calls, it is straightforward to analyze. The initialization of the Count array, and the second for loop which performs a prefix sum on the count array, each iterate at most k + 1 times and therefore take O(k) time. The other two for loops, and the initialization of the output array, each take O(n) time. Therefore the time for the whole algorithm is the sum of the times for these steps, O(n + k).
Because it uses arrays of length k + 1 and n, the total space usage of the algorithm is also O(n + k). For problem instances in which the maximum key value is significantly smaller than the number of items, counting sort can be highly space-efficient, as the only storage it uses other than its input and output arrays is the Count array which uses space O(k).
Variant algorithms
If each item to be sorted is itself an integer, and used as key as well, then the second and third loops of counting sort can be combined; in the second loop, instead of computing the position where items with key i should be placed in the output, simply append Count[i] copies of the number i to the output.
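A sketch of this combined-loop variant, under the assumption that the input is simply a list of integers in the range 0..k:

def counting_sort_integers(values, k):
    """Sort a list of integers in 0..k; the values themselves act as the keys."""
    count = [0] * (k + 1)
    for v in values:
        count[v] += 1
    output = []
    for v in range(k + 1):
        output.extend([v] * count[v])  # append count[v] copies of the number v
    return output

print(counting_sort_integers([4, 1, 3, 4, 3], 4))  # [1, 3, 3, 4, 4]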
This algorithm may also be used to eliminate duplicate keys, by replacing the Count array with a bit vector that stores a one for a key that is present in the input and a zero for a key that is not present. If additionally the items are the integer keys themselves, both second and third loops can be omitted entirely and the bit vector will itself serve as output, representing the values as offsets of the non-zero entries, added to the range's lowest value. Thus the keys are sorted and the duplicates are eliminated in this variant just by being placed into the bit array. This is how the Sieve of Eratosthenes works, essentially.
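A sketch of the duplicate-eliminating variant; a Python list of booleans stands in for the bit vector, and lo and hi are the assumed bounds of the key range.

def sorted_distinct(values, lo, hi):
    """Return the distinct integers of values (all within lo..hi) in sorted order."""
    present = [False] * (hi - lo + 1)      # the "bit vector"
    for v in values:
        present[v - lo] = True             # duplicates simply set the same bit again
    return [lo + i for i, bit in enumerate(present) if bit]

print(sorted_distinct([7, 3, 7, 5, 3], 3, 7))  # [3, 5, 7]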
For data in which the maximum key size is significantly smaller than the number of data items, counting sort may be parallelized by splitting the input into subarrays of approximately equal size, processing each subarray in parallel to generate a separate count array for each subarray, and then merging the count arrays. When used as part of a parallel radix sort algorithm, the key size (base of the radix representation) should be chosen to match the size of the split subarrays. The simplicity of the counting sort algorithm and its use of the easily parallelizable prefix sum primitive also make it usable in more fine-grained parallel algorithms.
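The sketch below shows, sequentially, the bookkeeping behind that parallel scheme: one histogram per chunk of the input, and each chunk's starting output offset for every key. In a real implementation the per-chunk histograms and the final scatter would each run on separate workers; the chunking strategy and variable names here are illustrative assumptions.

def chunk_offsets(values, k, num_chunks):
    """Per-chunk histograms and each chunk's starting output index for every key."""
    chunk_size = (len(values) + num_chunks - 1) // num_chunks
    chunks = [values[s:s + chunk_size] for s in range(0, len(values), chunk_size)]
    # Each of these histograms could be built by a separate worker.
    hists = []
    for chunk in chunks:
        count = [0] * (k + 1)
        for v in chunk:
            count[v] += 1
        hists.append(count)
    # Merge: a prefix sum over the summed histograms gives the global start of each key.
    start = [0] * (k + 1)
    for i in range(1, k + 1):
        start[i] = start[i - 1] + sum(h[i - 1] for h in hists)
    # Chunk c additionally skips the key-i items of all earlier chunks.
    offsets, running = [], start[:]
    for h in hists:
        offsets.append(running[:])
        running = [running[i] + h[i] for i in range(k + 1)]
    # Chunk c can now scatter independently: it writes an item with key i at
    # offsets[c][i] and then increments that entry.
    return chunks, offsets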
As described, counting sort is not an in-place algorithm; even disregarding the count array, it needs separate input and output arrays. It is possible to modify the algorithm so that it places the items into sorted order within the same array that was given to it as the input, using only the count array as auxiliary storage; however, the modified in-place version of counting sort is not stable.
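One way to realize the in-place variant is the cycle-following permutation step used by American flag sort; the sketch below assumes integer keys in 0..k and, as noted above, does not preserve the order of equal keys.

def counting_sort_in_place(items, k, key=lambda x: x):
    """Rearrange items in place by integer key in 0..k (not stable)."""
    count = [0] * (k + 1)
    for x in items:
        count[key(x)] += 1
    # start[b] is the first index of bucket b; start[k + 1] equals len(items).
    start = [0] * (k + 2)
    for b in range(k + 1):
        start[b + 1] = start[b] + count[b]
    next_slot = start[:k + 1]              # next free index inside each bucket
    for b in range(k + 1):
        while next_slot[b] < start[b + 1]:
            c = key(items[next_slot[b]])
            if c == b:
                next_slot[b] += 1          # item already sits in its own bucket
            else:
                # Swap the item into its bucket; examine whatever comes back next.
                dest = next_slot[c]
                items[next_slot[b]], items[dest] = items[dest], items[next_slot[b]]
                next_slot[c] += 1
    return items

print(counting_sort_in_place([3, 0, 2, 3, 1, 0], 3))  # [0, 0, 1, 2, 3, 3]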
- Cormen, Thomas H.; Leiserson, Charles E.; Rivest, Ronald L.; Stein, Clifford (2001), "8.2 Counting Sort", Introduction to Algorithms (2nd ed.), MIT Press and McGraw-Hill, pp. 168–170, ISBN 0-262-03293-7. See also the historical notes on page 181.
- Edmonds, Jeff (2008), "5.2 Counting Sort (a Stable Sort)", How to Think about Algorithms, Cambridge University Press, pp. 72–75, ISBN 978-0-521-84931-9.
- Sedgewick, Robert (2003), "6.10 Key-Indexed Counting", Algorithms in Java, Parts 1-4: Fundamentals, Data Structures, Sorting, and Searching (3rd ed.), Addison-Wesley, pp. 312–314.
- Knuth, D. E. (1998), The Art of Computer Programming, Volume 3: Sorting and Searching (2nd ed.), Addison-Wesley, ISBN 0-201-89685-0. Section 5.2, Sorting by counting, pp. 75–80, and historical notes, p. 170.
- Burris, David S.; Schember, Kurt (1980), "Sorting sequential files with limited auxiliary storage", Proceedings of the 18th annual Southeast Regional Conference, New York, NY, USA: ACM, pp. 23–31, doi:10.1145/503838.503855.
- Zagha, Marco; Blelloch, Guy E. (1991), "Radix sort for vector multiprocessors", Proceedings of Supercomputing '91, November 18-22, 1991, Albuquerque, NM, USA, IEEE Computer Society / ACM, pp. 712–721, doi:10.1145/125826.126164.
- Reif, John H. (1985), "An optimal parallel algorithm for integer sorting", Proc. 26th Annual Symposium on Foundations of Computer Science (FOCS 1985), pp. 496–504, doi:10.1109/SFCS.1985.9.
- Seward, H. H. (1954), "2.4.6 Internal Sorting by Floating Digital Sort", Information sorting in the application of electronic digital computers to business operations, Master's thesis, Report R-232, Massachusetts Institute of Technology, Digital Computer Laboratory, pp. 25–28.
- Counting Sort html5 visualization
- Demonstration applet from Cardiff University
- Efficient Counting Sort in Haskell
- Kagel, Art S. (2 June 2006), "counting sort", in Black, Paul E., Dictionary of Algorithms and Data Structures, U.S. National Institute of Standards and Technology, retrieved 2011-04-21.
- A simple Counting Sort implementation. | http://en.wikipedia.org/wiki/Counting_sort | 13 |
16 | Dr. David R. Burgess
There are two criteria to consider when evaluating an argument:
- The premises must lead to the conclusion.
- The premises must be true.

There are formal methods to evaluate the first criterion. The possible worlds method is the most general one; identifying whether or not the rules for common deductive argument forms (syllogisms, etc.) are followed is another method of deciding if the premises lead to the conclusion. Many logical fallacies exist, and most beginning students find it helpful to stick to one deductive argument form, like conditional syllogisms, in order to be certain that the premises always lead to the conclusion.
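As a rough illustration of the possible worlds idea, the sketch below checks whether premises lead to a conclusion by enumerating every truth assignment (every "possible world") for the atomic statements. The propositional encoding, variable names, and example arguments are choices made for this sketch, not part of the original page.

from itertools import product

def entails(premises, conclusion, variables):
    """True if the conclusion holds in every possible world where all premises hold."""
    for world in product([True, False], repeat=len(variables)):
        w = dict(zip(variables, world))
        if all(p(w) for p in premises) and not conclusion(w):
            return False   # a world where the premises are true but the conclusion is false
    return True

# "If it rains, the ground is wet. It rains. Therefore the ground is wet."
premises = [lambda w: (not w['rain']) or w['wet'],   # if rain then wet
            lambda w: w['rain']]
print(entails(premises, lambda w: w['wet'], ['rain', 'wet']))    # True: premises lead to the conclusion

# Affirming the consequent fails: "If rain then wet. Wet. Therefore rain."
premises2 = [lambda w: (not w['rain']) or w['wet'],
             lambda w: w['wet']]
print(entails(premises2, lambda w: w['rain'], ['rain', 'wet']))  # False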
The second criterion is often more difficult to determine since it requires some sort of outside verification. Often the verification will be done by making observations under controlled circumstances (performing experiments) or by drawing upon the experience of all humankind (using well-established theories). In order for an argument to establish the truthfulness of a conclusion it must be a sound argument: the premises must deductively lead to the conclusion and the premises must be true.
- A deductive argument meets the first criterion, but the process of determining that it is deductive leaves the second criterion undetermined. That is, the premises lead to the conclusion (without question), but we do not know whether the premises are true.
- A valid argument is a deductive argument: a different name for the same thing.
- A sound argument must satisfy both criteria: the premises must deductively lead to the conclusion and the premises must be true.
Brief summary of logical fallacies. | http://www.rivier.edu/faculty/dburgess/web/logic/t_logic.htm | 13
16 | Synthetic aperture radar
Synthetic-aperture radar (SAR) is a form of radar whose defining characteristic is its use of relative motion, between an antenna and its target region, to provide distinctive long-term coherent-signal variations, that are exploited to obtain finer spatial resolution than is possible with conventional beam-scanning means. It originated as an advanced form of side-looking airborne radar (SLAR).
SAR is usually implemented by mounting, on a moving platform such as an aircraft or spacecraft, a single beam-forming antenna from which a target scene is repeatedly illuminated with pulses of radio waves at wavelengths anywhere from a meter down to millimeters. The many echo waveforms received successively at the different antenna positions are coherently detected and stored and then post-processed together to resolve elements in an image of the target region.
Current (2010) airborne systems provide resolutions to about 10 cm, ultra-wideband systems provide resolutions of a few millimeters, and experimental terahertz SAR has provided sub-millimeter resolution in the laboratory.
SAR images have wide applications in remote sensing and mapping of the surfaces of both the Earth and other planets. SAR can also be implemented as "inverse SAR" by observing a moving target over a substantial time with a stationary antenna.
Relationship to phased arrays
A technique closely related to SAR uses an array (referred to as a "phased array") of real antenna elements spatially distributed over either one or two dimensions perpendicular to the radar-range dimension. These physical arrays are truly synthetic ones, indeed being created by synthesis of a collection of subsidiary physical antennas. Their operation need not involve motion relative to targets. All elements of these arrays receive simultaneously in real time, and the signals passing through them can be individually subjected to controlled shifts of the phases of those signals. One result can be to respond most strongly to radiation received from a specific small scene area, focusing on that area to determine its contribution to the total signal received. The coherently detected set of signals received over the entire array aperture can be replicated in several data-processing channels and processed differently in each. The set of responses thus traced to different small scene areas can be displayed together as an image of the scene.
In comparison, a SAR's (commonly) single physical antenna element gathers signals at different positions at different times. When the radar is carried by an aircraft or an orbiting vehicle, those positions are functions of a single variable, distance along the vehicle’s path, which is a single mathematical dimension (not necessarily the same as a linear geometric dimension). The signals are stored, thus becoming functions, no longer of time, but of recording locations along that dimension. When the stored signals are read out later and combined with specific phase shifts, the result is the same as if the recorded data had been gathered by an equally long and shaped phased array. What is thus synthesized is a set of signals equivalent to what could have been received simultaneously by such an actual large-aperture (in one dimension) phased array. The SAR simulates (rather than synthesizes) that long one-dimensional phased array. Although the term in the title of this article has thus been incorrectly derived, it is now firmly established by half a century of usage.
While operation of a phased array is readily understood as a completely geometric technique, the fact that a synthetic aperture system gathers its data as it (or its target) moves at some speed means that phases which varied with the distance traveled originally varied with time, hence constituted temporal frequencies. Temporal frequencies being the variables commonly used by radar engineers, their analyses of SAR systems are usually (and very productively) couched in such terms. In particular, the variation of phase during flight over the length of the synthetic aperture is seen as a sequence of Doppler shifts of the received frequency from that of the transmitted frequency. It is significant, though, to realize that, once the received data have been recorded and thus have become timeless, the SAR data-processing situation is also understandable as a special type of phased array, treatable as a completely geometric process.
The core of both the SAR and the phased array techniques is that the distances that radar waves travel to and back from each scene element consist of some integer number of wavelengths plus some fraction of a "final" wavelength. Those fractions cause differences between the phases of the re-radiation received at various SAR or array positions. Coherent detection is needed to capture the signal phase information in addition to the signal amplitude information. That type of detection requires finding the differences between the phases of the received signals and the simultaneous phase of a well-preserved sample of the transmitted illumination.
Every wave scattered from any point in the scene has a circular curvature about that point as a center. Signals from scene points at different ranges therefore arrive at a planar array with different curvatures, resulting in signal phase changes which follow different quadratic variations across a planar phased array. Additional linear variations result from points located in different directions from the center of the array. Fortunately, any one combination of these variations is unique to one scene point, and is calculable. For a SAR, the two-way travel doubles that phase change.
In reading the following two paragraphs, be particularly careful to distinguish between array elements and scene elements. Also remember that each of the latter has, of course, a matching image element.
Comparison of the array-signal phase variation across the array with the total calculated phase variation pattern can reveal the relative portion of the total received signal that came from the only scene point that could be responsible for that pattern. One way to do the comparison is by a correlation computation, multiplying, for each scene element, the received and the calculated field-intensity values array element by array element and then summing the products for each scene element. Alternatively, one could, for each scene element, subtract each array element’s calculated phase shift from the actual received phase and then vectorially sum the resulting field-intensity differences over the array. Wherever in the scene the two phases substantially cancel everywhere in the array, the difference vectors being added are in phase, yielding, for that scene point, a maximum value for the sum.
The equivalence of these two methods can be seen by recognizing that multiplication of sinusoids can be done by summing phases which are complex-number exponents of e, the base of natural logarithms.
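A toy numerical check of that equivalence, with invented phase values, can be written in a few lines: correlating the received field against the calculated field gives the same sum as subtracting the calculated phases and vectorially adding the resulting unit vectors.

```python
import numpy as np

rng = np.random.default_rng(0)
calculated = rng.uniform(0, 2 * np.pi, 64)     # phases predicted for one scene point
received = np.exp(1j * (calculated + 0.3))     # returns sharing a common extra phase

# Method 1: correlate (multiply by the conjugate of the expected field, then sum).
corr = np.sum(received * np.exp(-1j * calculated))

# Method 2: subtract the calculated phase from each received phase, then vector-sum.
diff = np.sum(np.exp(1j * (np.angle(received) - calculated)))

print(np.allclose(corr, diff))   # True: the two formulations agree
```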
However it is done, the image-deriving process amounts to "backtracking" the process by which nature previously spread the scene information over the array. In each direction, the process may be viewed as a Fourier transform, which is a type of correlation process. The image-extraction process we use can then be seen as another Fourier transform which is a reversal of the original natural one.
It is important to realize that only those sub-wavelength differences of successive ranges from the transmitting antenna to each target point and back, which govern signal phase, are used to refine the resolution in any geometric dimension. The central direction and the angular width of the illuminating beam do not contribute directly to creating that fine resolution. Instead, they serve only to select the solid-angle region from which usable range data are received. While some distinguishing of the ranges of different scene items can be made from the forms of their sub-wavelength range variations at short ranges, the very large depth of focus that occurs at long ranges usually requires that over-all range differences (larger than a wavelength) be used to define range resolutions comparable to the achievable cross-range resolution.
In a typical SAR application, a single radar antenna is attached to an aircraft or spacecraft so as to radiate a beam whose wave-propagation direction has a substantial component perpendicular to the flight-path direction. The beam is allowed to be broad in the vertical direction so it will illuminate the terrain from nearly beneath the aircraft out toward the horizon.
Resolution in the range dimension of the image is accomplished by creating pulses which define very short time intervals, either by emitting short pulses consisting of a carrier frequency and the necessary sidebands, all within a certain bandwidth, or by using longer "chirp pulses" in which frequency varies (often linearly) with time within that bandwidth. The differing times at which echoes return allow points at different distances to be distinguished.
The total signal is that from a beamwidth-sized patch of the ground. To produce a beam that is narrow in the cross-range direction, diffraction effects require that the antenna be wide in that dimension. Therefore the distinguishing, from each other, of co-range points simply by strengths of returns that persist for as long as they are within the beam width is difficult with aircraft-carryable antennas, because their beams can have linear widths only about two orders of magnitude (hundreds of times) smaller than the range. (Spacecraft-carryable ones can do 10 or more times better.) However, if both the amplitude and the phase of returns are recorded, then the portion of that multi-target return that was scattered radially from any smaller scene element can be extracted by phase-vector correlation of the total return with the form of the return expected from each such element. Careful design and operation can accomplish resolution of items smaller than a millionth of the range, for example, 30 cm at 300 km, or about one foot at nearly 200 miles (320 km).
The process can be thought of as combining the series of spatially distributed observations as if all had been made simultaneously with an antenna as long as the beamwidth and focused on that particular point. The "synthetic aperture" simulated at maximum system range by this process not only is longer than the real antenna, but, in practical applications, it is much longer than the radar aircraft, and tremendously longer than the radar spacecraft.
Image resolution of SAR in its range coordinate (expressed in image pixels per distance unit) is mainly proportional to the radio bandwidth of whatever type of pulse is used. In the cross-range coordinate, the similar resolution is mainly proportional to the bandwidth of the Doppler shift of the signal returns within the beamwidth. Since Doppler frequency depends on the angle of the scattering point's direction from the broadside direction, the Doppler bandwidth available within the beamwidth is the same at all ranges. Hence the theoretical spatial resolution limits in both image dimensions remain constant with variation of range. However, in practice, both the errors that accumulate with data-collection time and the particular techniques used in post-processing further limit cross-range resolution at long ranges.
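As a hedged back-of-the-envelope illustration using the standard relations (the numerical values here are invented, not taken from this article), slant-range resolution is roughly c/(2B) for signal bandwidth B, while the best cross-range resolution of a fully focused stripmap SAR is roughly half the real antenna length, independent of range:

```python
c = 3.0e8                      # speed of light, m/s

def range_resolution(bandwidth_hz):
    """Slant-range resolution ~ c / (2 * bandwidth)."""
    return c / (2.0 * bandwidth_hz)

def stripmap_azimuth_resolution(antenna_length_m):
    """Best-case cross-range (azimuth) resolution of a focused stripmap SAR,
    roughly half the real antenna length, independent of range."""
    return antenna_length_m / 2.0

print(range_resolution(100e6))            # 100 MHz bandwidth -> 1.5 m in range
print(stripmap_azimuth_resolution(10.0))  # 10 m antenna -> ~5 m cross-range
```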
The conversion of return delay time to geometric range can be very accurate because of the natural constancy of the speed and direction of propagation of electromagnetic waves. However, for an aircraft flying through the never-uniform and never-quiescent atmosphere, the relating of pulse transmission and reception times to successive geometric positions of the antenna must be accompanied by constant adjusting of the return phases to account for sensed irregularities in the flight path. SARs in spacecraft avoid that atmosphere problem, but still must make corrections for known antenna movements due to rotations of the spacecraft, even those that are reactions to movements of onboard machinery. Locating a SAR in a manned space vehicle may require that the humans carefully remain motionless relative to the vehicle during data collection periods.
Although some references to SARs have characterized them as "radar telescopes", their actual optical analogy is the microscope, the detail in their images being smaller than the length of the synthetic aperture. In radar-engineering terms, while the target area is in the "far field" of the illuminating antenna, it is in the "near field" of the simulated one.
Returns from scatterers within the range extent of any image are spread over a matching time interval. The inter-pulse period must be long enough to allow farthest-range returns from any pulse to finish arriving before the nearest-range ones from the next pulse begin to appear, so that those do not overlap each other in time. On the other hand, the interpulse rate must be fast enough to provide sufficient samples for the desired across-range (or across-beam) resolution. When the radar is to be carried by a high-speed vehicle and is to image a large area at fine resolution, those conditions may clash, leading to what has been called SAR's ambiguity problem. The same considerations apply to "conventional" radars also, but this problem occurs significantly only when resolution is so fine as to be available only through SAR processes. Since the basis of the problem is the information-carrying capacity of the single signal-input channel provided by one antenna, the only solution is to use additional channels fed by additional antennas. The system then becomes a hybrid of a SAR and a phased array, sometimes being called a Vernier Array.
Combining the series of observations requires significant computational resources, usually using Fourier transform techniques. The high digital computing speed now available allows such processing to be done in near-real time on board a SAR aircraft. (There is necessarily a minimum time delay until all parts of the signal have been received.) The result is a map of radar reflectivity, including both amplitude and phase. The amplitude information, when shown in a map-like display, gives information about ground cover in much the same way that a black-and-white photo does. Variations in processing may also be done in either vehicle-borne stations or ground stations for various purposes, so as to accentuate certain image features for detailed target-area analysis.
Although the phase information in an image is generally not made available to a human observer of an image display device, it can be preserved numerically, and sometimes allows certain additional features of targets to be recognized. Unfortunately, the phase differences between adjacent image picture elements ("pixels") also produce random interference effects called "coherence speckle", which is a sort of graininess with dimensions on the order of the resolution, causing the concept of resolution to take on a subtly different meaning. This effect is the same as is apparent both visually and photographically in laser-illuminated optical scenes. The scale of that random speckle structure is governed by the size of the synthetic aperture in wavelengths, and cannot be finer than the system's resolution. Speckle structure can be subdued at the expense of resolution.
Before rapid digital computers were available, the data processing was done using an optical holography technique. The analog radar data were recorded as a holographic interference pattern on photographic film at a scale permitting the film to preserve the signal bandwidths (for example, 1:1,000,000 for a radar using a 0.6-meter wavelength). Then light using, for example, 0.6-micrometer waves (as from a helium-neon laser) passing through the hologram could project a terrain image at a scale recordable on another film at reasonable processor focal distances of around a meter. This worked because both SAR and phased arrays are fundamentally similar to optical holography, but using microwaves instead of light waves. The "optical data-processors" developed for this radar purpose were the first effective analog optical computer systems, and were, in fact, devised before the holographic technique was fully adapted to optical imaging. Because of the different sources of range and across-range signal structures in the radar signals, optical data-processors for SAR included not only both spherical and cylindrical lenses, but sometimes conical ones.
The following considerations apply also to real-aperture terrain-imaging radars, but are more consequential when resolution in range is matched to a cross-beam resolution that is available only from a SAR.
The two dimensions of a radar image are range and cross-range. Radar images of limited patches of terrain can resemble oblique photographs, but not ones taken from the location of the radar. This is because the range coordinate in a radar image is perpendicular to the vertical-angle coordinate of an oblique photo. The apparent entrance-pupil position (or camera center) for viewing such an image is therefore not as if at the radar, but as if at a point from which the viewer's line of sight is perpendicular to the slant-range direction connecting radar and target, with slant-range increasing from top to bottom of the image.
Because slant ranges to level terrain vary in vertical angle, each elevation of such terrain appears as a curved surface, specifically a hyperbolic cosine one. Verticals at various ranges are perpendiculars to those curves. The viewer’s apparent looking directions are parallel to the curve’s "hypcos" axis. Items directly beneath the radar appear as if optically viewed horizontally (i.e., from the side) and those at far ranges as if optically viewed from directly above. These curvatures are not evident unless large extents of near-range terrain, including steep slant ranges, are being viewed.
When viewed as specified above, fine-resolution radar images of small areas can appear most nearly like familiar optical ones, for two reasons. The first reason is easily understood by imagining a flagpole in the scene. The slant-range to its upper end is less than that to its base. Therefore the pole can appear correctly top-end up only when viewed in the above orientation. Secondly, the radar illumination then being downward, shadows are seen in their most-familiar "overhead-lighting" direction.
Note that the image of the pole’s top will overlay that of some terrain point which is on the same slant range arc but at a shorter horizontal range ("ground-range"). Images of scene surfaces which faced both the illumination and the apparent eyepoint will have geometries that resemble those of an optical scene viewed from that eyepoint. However, slopes facing the radar will be foreshortened and ones facing away from it will be lengthened from their horizontal (map) dimensions. The former will therefore be brightened and the latter dimmed.
Returns from slopes steeper than perpendicular to slant range will be overlaid on those of lower-elevation terrain at a nearer ground-range, both being visible but intermingled. This is especially the case for vertical surfaces like the walls of buildings. Another viewing inconvenience that arises when a surface is steeper than perpendicular to the slant range is that it is then illuminated on one face but "viewed" from the reverse face. Then one "sees", for example, the radar-facing wall of a building as if from the inside, while the building’s interior and the rear wall (that nearest to, hence expected to be optically visible to, the viewer) have vanished, since they lack illumination, being in the shadow of the front wall and the roof. Some return from the roof may overlay that from the front wall, and both of those may overlay return from terrain in front of the building. The visible building shadow will include those of all illuminated items. Long shadows may exhibit blurred edges due to the illuminating antenna's movement during the "time exposure" needed to create the image.
Surfaces that we usually consider rough will, if that roughness consists of relief less than the radar wavelength, behave as smooth mirrors, showing, beyond such a surface, additional images of items in front of it. Those mirror images will appear within the shadow of the mirroring surface, sometimes filling the entire shadow, thus preventing recognition of the shadow.
An important fact that applies to SARs but not to real-aperture radars is that the direction of overlay of any scene point is not directly toward the radar, but toward that point of the SAR's current path direction that is nearest to the target point. If the SAR is "squinting" forward or aft away from the exactly broadside direction, then the illumination direction, and hence the shadow direction, will not be opposite to the overlay direction, but slanted to right or left from it. An image will appear with the correct projection geometry when viewed so that the overlay direction is vertical, the SAR's flight-path is above the image, and range increases somewhat downward.
Objects in motion within a SAR scene alter the Doppler frequencies of the returns. Such objects therefore appear in the image at locations offset in the across-range direction by amounts proportional to the range-direction component of their velocity. Road vehicles may be depicted off the roadway and therefore not recognized as road traffic items. Trains appearing away from their tracks are more easily properly recognized by their length parallel to known trackage as well as by the absence of an equal length of railbed signature and of some adjacent terrain, both having been shadowed by the train. While images of moving vessels can be offset from the line of the earlier parts of their wakes, the more recent parts of the wake, which still partake of some of the vessel's motion, appear as curves connecting the vessel image to the relatively quiescent far-aft wake. In such identifiable cases, speed and direction of the moving items can be determined from the amounts of their offsets. The along-track component of a target's motion causes some defocus. Random motions such as that of wind-driven tree foliage, vehicles driven over rough terrain, or humans or other animals walking or running generally render those items not focusable, resulting in blurring or even effective invisibility.
These considerations, along with the speckle structure due to coherence, take some getting used to in order to correctly interpret SAR images. To assist in that, large collections of significant target signatures have been accumulated by performing many test flights over known terrains and cultural objects.
Origin and early development (ca. 1950–1975)
Carl A. Wiley, a mathematician at Goodyear Aircraft Company in Litchfield Park, Arizona, invented synthetic-aperture radar in June 1951 while working on a correlation guidance system for the Atlas ICBM program. In early 1952, Wiley, together with Fred Heisley and Bill Welty, constructed a concept validation system known as DOUSER ("Doppler Unbeamed Search Radar"). During the 1950s and 1960s, Goodyear Aircraft (later Goodyear Aerospace) introduced numerous advancements in SAR technology.
Independently of Wiley's work, experimental trials in early 1952 by Sherwin and others at the University of Illinois' Control Systems Laboratory showed results that they pointed out "could provide the basis for radar systems with greatly improved angular resolution" and might even lead to systems capable of focusing at all ranges simultaneously.
In both of those programs, processing of the radar returns was done by electrical-circuit filtering methods. In essence, signal strength in isolated discrete bands of Doppler frequency defined image intensities that were displayed at matching angular positions within proper range locations. When only the central (zero-Doppler band) portion of the return signals was used, the effect was as if only that central part of the beam existed. That led to the term Doppler Beam Sharpening. Displaying returns from several adjacent non-zero Doppler frequency bands accomplished further "beam-subdividing" (sometimes called "unfocused radar," though it could have been considered "semi-focused"). Wiley's patent, applied for in 1954, still proposed similar processing. The bulkiness of the circuitry then available limited the extent to which those schemes might further improve resolution.
The principle was included in a memorandum authored by Walter Hausz of General Electric that was part of the then-secret report of a 1952 Dept. of Defense summer study conference called TEOTA ("The Eyes of the Army"), which sought to identify new techniques useful for military reconnaissance and technical gathering of intelligence. A follow-on summer program in 1953 at the University of Michigan, called Project Wolverine, identified several of the TEOTA subjects, including Doppler-assisted sub-beamwidth resolution, as research efforts to be sponsored by the Department of Defense (DoD) at various academic and industrial research laboratories. In that same year, the Illinois group produced a "strip-map" image exhibiting a considerable amount of sub-beamwidth resolution.
A more advanced focused-radar project was among several remote sensing schemes assigned in 1953 to Project Michigan, a tri-service-sponsored (Army, Navy, Air Force) program at the University of Michigan's Willow Run Research Center (WRRC), that program being administered by the Army Signal Corps. Initially called the side-looking radar project, it was carried out by a group first known as the Radar Laboratory and later as the Radar and Optics Laboratory. It proposed to take into account, not just the short-term existence of several particular Doppler shifts, but the entire history of the steadily varying shifts from each target as the latter crossed the beam. An early analysis by Dr. Louis J. Cutrona, Weston E. Vivian, and Emmett N. Leith of that group showed that such a fully focused system should yield, at all ranges, a resolution equal to the width (or, by some criteria, the half-width) of the real antenna carried on the radar aircraft and continually pointed broadside to the aircraft's path.
The required data processing amounted to calculating cross-correlations of the received signals with samples of the forms of signals to be expected from unit-amplitude sources at the various ranges. At that time, even large digital computers had capabilities somewhat near the levels of today's four-function handheld calculators, hence were nowhere near able to do such a huge amount of computation. Instead, the device for doing the correlation computations was to be an optical correlator.
It was proposed that signals received by the traveling antenna and coherently detected be displayed as a single range-trace line across the diameter of the face of a cathode-ray tube, the line's successive forms being recorded as images projected onto a film traveling perpendicular to the length of that line. The information on the developed film was to be subsequently processed in the laboratory on equipment still to be devised as a principal task of the project. In the initial processor proposal, an arrangement of lenses was expected to multiply the recorded signals point-by-point with the known signal forms by passing light successively through both the signal film and another film containing the known signal pattern. The subsequent summation, or integration, step of the correlation was to be done by converging appropriate sets of multiplication products by the focusing action of one or more spherical and cylindrical lenses. The processor was to be, in effect, an optical analog computer performing large-scale scalar arithmetic calculations in many channels (with many light "rays") at once. Ultimately, two such devices would be needed, their outputs to be combined as quadrature components of the complete solution.
Fortunately (as it turned out), a desire to keep the equipment small had led to recording the reference pattern on 35 mm film. Trials promptly showed that the patterns on the film were so fine as to show pronounced diffraction effects that prevented sharp final focusing.
That led Leith, a physicist who was devising the correlator, to recognize that those effects in themselves could, by natural processes, perform a significant part of the needed processing, since along-track strips of the recording operated like diametrical slices of a series of circular optical zone plates. Any such plate performs somewhat like a lens, each plate having a specific focal length for any given wavelength. The recording that had been considered as scalar became recognized as pairs of opposite-sign vector ones of many spatial frequencies plus a zero-frequency "bias" quantity. The needed correlation summation changed from a pair of scalar ones to a single vector one.
Each zone plate strip has two equal but oppositely signed focal lengths, one real, where a beam through it converges to a focus, and one virtual, where another beam appears to have diverged from, beyond the other face of the zone plate. The zero-frequency (DC bias) component has no focal point, but overlays both the converging and diverging beams. The key to obtaining, from the converging wave component, focused images that are not overlaid with unwanted haze from the other two is to block the latter, allowing only the wanted beam to pass through a properly positioned frequency-band selecting aperture.
Each radar range yields a zone plate strip with a focal length proportional to that range. This fact became a principal complication in the design of optical processors. Consequently, technical journals of the time contain a large volume of material devoted to ways for coping with the variation of focus with range.
For that major change in approach, the light used had to be both monochromatic and coherent, properties that were already a requirement on the radar radiation. Lasers also then being in the future, the best then-available approximation to a coherent light source was the output of a mercury vapor lamp, passed through a color filter that was matched to the lamp spectrum's green band, and then concentrated as well as possible onto a very small beam-limiting aperture. While the resulting amount of light was so weak that very long exposure times had to be used, a workable optical correlator was assembled in time to be used when appropriate data became available.
Although creating that radar was a more straightforward task based on already-known techniques, that work did demand the achievement of signal linearity and frequency stability that were at the extreme state of the art. An adequate instrument was designed and built by the Radar Laboratory and was installed in a C-46 (Curtiss Commando) aircraft. Because the aircraft was bailed to WRRC by the U. S. Army and was flown and maintained by WRRC's own pilots and ground personnel, it was available for many flights at times matching the Radar Laboratory's needs, a feature important for allowing frequent re-testing and "debugging" of the continually developing complex equipment. By contrast, the Illinois group had used a C-46 belonging to the Air Force and flown by AF pilots only by pre-arrangement, resulting, in the eyes of those researchers, in limitation to a less-than-desirable frequency of flight tests of their equipment, hence a low bandwidth of feedback from tests. (Later work with newer Convair aircraft continued the Michigan group’s local control of flight schedules.)
Michigan's chosen 5-foot (1.5 m)-wide WWII-surplus antenna was theoretically capable of 5-foot (1.5 m) resolution, but data from only 10% of the beamwidth was used at first, the goal at that time being to demonstrate 50-foot (15 m) resolution. It was understood that finer resolution would require the added development of means for sensing departures of the aircraft from an ideal heading and flight path, and for using that information for making needed corrections to the antenna pointing and to the received signals before processing. After numerous trials in which even small atmospheric turbulence kept the aircraft from flying straight and level enough for good 50-foot (15 m) data, one pre-dawn flight in August 1957 yielded a map-like image of the Willow Run Airport area which did demonstrate 50-foot (15 m) resolution in some parts of the image, whereas the illuminated beam width there was 900 feet (270 m). Although the program had been considered for termination by DoD due to what had seemed to be a lack of results, that first success ensured further funding to continue development leading to solutions to those recognized needs.
The SAR principle was first acknowledged publicly via an April 1960 press release about the U. S. Army experimental AN/UPD-1 system, which consisted of an airborne element made by Texas Instruments and installed in a Beech L-23D aircraft and a mobile ground data-processing station made by WRRC and installed in a military van. At the time, the nature of the data processor was not revealed. A technical article in the journal of the IRE (Institute of Radio Engineers) Professional Group on Military Electronics in February 1961 described the SAR principle and both the C-46 and AN/UPD-1 versions, but did not tell how the data were processed, nor that the UPD-1's maximum resolution capability was about 50 feet (15 m). However, the June 1960 issue of the IRE Professional Group on Information Theory had contained a long article on "Optical Data Processing and Filtering Systems" by members of the Michigan group. Although it did not refer to the use of those techniques for radar, readers of both journals could quite easily understand the existence of a connection between articles sharing some authors.
An operational system to be carried in a reconnaissance version of the F-4 "Phantom" aircraft was quickly devised and was used briefly in Vietnam, where it failed to favorably impress its users, due to the combination of its low resolution (similar to the UPD-1's), the speckly nature of its coherent-wave images (similar to the speckliness of laser images), and the poorly understood dissimilarity of its range/cross-range images from the angle/angle optical ones familiar to military photo interpreters. The lessons it provided were well learned by subsequent researchers, operational system designers, image-interpreter trainers, and the DoD sponsors of further development and acquisition.
In subsequent work the technique's latent capability was eventually achieved. That work, depending on advanced radar circuit designs and precision sensing of departures from ideal straight flight, along with more sophisticated optical processors using laser light sources and specially designed very large lenses made from remarkably clear glass, allowed the Michigan group to advance system resolution, at about 5-year intervals, first to 15 feet (4.6 m), then 5 feet (1.5 m), and, by the mid-1970s, to 1 foot (the latter only over very short range intervals while processing was still being done optically). The latter levels and the associated very wide dynamic range proved suitable for identifying many objects of military concern as well as soil, water, vegetation, and ice features being studied by a variety of environmental researchers having security clearances allowing them access to what was then classified imagery. Similarly improved operational systems soon followed each of those finer-resolution steps.
Even the 5-foot (1.5 m) resolution stage had over-taxed the ability of cathode-ray tubes (limited to about 2000 distinguishable items across the screen diameter) to deliver fine enough details to signal films while still covering wide range swaths, and taxed the optical processing systems in similar ways. However, at about the same time, digital computers finally became capable of doing the processing without similar limitation, and the consequent presentation of the images on cathode ray tube monitors instead of film allowed for better control over tonal reproduction and for more convenient image mensuration.
Achievement of the finest resolutions at long ranges was aided by adding the capability to swing a larger airborne antenna so as to more strongly illuminate a limited target area continually while collecting data over several degrees of aspect, removing the previous limitation of resolution to the antenna width. This was referred to as the spotlight mode, which no longer produced continuous-swath images but, instead, images of isolated patches of terrain.
It was understood very early in SAR development that the extremely smooth orbital path of an out-of-the-atmosphere platform made it ideally suited to SAR operation. Early experience with artificial earth satellites had also demonstrated that the Doppler frequency shifts of signals traveling through the ionosphere and atmosphere were stable enough to permit very fine resolution to be achievable even at ranges of hundreds of kilometers. While experimental verification of those facts by a project now referred to as the Quill satellite (still classified after nearly half a century) occurred within the second decade after the initial work began, several of the capabilities for creating useful classified systems did not exist for another two decades.
That seemingly slow rate of advances was often paced by the progress of other inventions, such as the laser, the digital computer, circuit miniaturization, and compact data storage. Once the laser appeared, optical data processing became a fast process because it provided many parallel analog channels, but devising optical chains suited to matching signal focal lengths to ranges proceeded by many stages and turned out to call for some novel optical components. Since the process depended on diffraction of light waves, it required anti-vibration mountings, clean rooms, and highly trained operators. Even at its best, its use of CRTs and film for data storage placed limits on the range depth of images.
At several stages, attaining the frequently over-optimistic expectations for digital computation equipment proved to take far longer than anticipated. For example, the SEASAT system was ready to orbit before its digital processor became available, so a quickly assembled optical recording and processing scheme had to be used to obtain timely confirmation of system operation. In 1978, the first digital SAR processor was developed by the Canadian aerospace company MacDonald Dettwiler (MDA). When its digital processor was finally completed and used, the digital equipment of that time took many hours to create one swath of image from each run of a few seconds of data. Still, while that was a step down in speed, it was a step up in image quality. Modern methods now provide both high speed and high quality.
Although the above specifies the system development contributions of only a few organizations, many other groups had also become players as the value of SAR became more and more apparent. Especially crucial to the organization and funding of the initial long development process was the technical expertise and foresight of a number of both civilian and uniformed project managers in equipment procurement agencies in the federal government, particularly, of course, ones in the armed forces and in the intelligence agencies, and also in some civilian space agencies.
The SAR algorithm, as given here, applies to phased arrays generally.
A three-dimensional array (a volume) of scene elements is defined which will represent the volume of space within which targets exist. Each element of the array is a cubical voxel representing the probability (a "density") of a reflective surface being at that location in space. (Note that two-dimensional SARs are also possible—showing only a top-down view of the target area).
Initially, the SAR algorithm gives each voxel a density of zero.
Then, for each captured waveform, the entire volume is iterated. For a given waveform and voxel, the distance from the position represented by that voxel to the antenna(e) used to capture that waveform is calculated. That distance represents a time delay into the waveform. The sample value at that position in the waveform is then added to the voxel's density value. This represents a possible echo from a target at that position. Note that there are several optional approaches here, depending on the precision of the waveform timing, among other things. For example, if phase cannot be accurately known, then only the envelope magnitude (with the help of a Hilbert transform) of the waveform sample might be added to the voxel. If polarization and phase are known in the waveform, and are accurate enough, then these values might be added to a more complex voxel that holds such measurements separately.
After all waveforms have been iterated over all voxels, the basic SAR processing is complete.
What remains, in the simplest approach, is to decide what voxel density value represents a solid object. Voxels whose density is below that threshold are ignored. Note that the threshold level chosen must at least be higher than the peak energy of any single wave—otherwise that wave peak would appear as a sphere (or ellipse, in the case of multistatic operation) of false "density" across the entire volume. Thus to detect a point on a target, there must be at least two different antenna echoes from that point. Consequently, there is a need for large numbers of antenna positions to properly characterize a target.
The voxels that passed the threshold criteria are visualized in 2D or 3D. Optionally, added visual quality can sometimes be had by use of a surface detection algorithm like marching cubes.
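A heavily simplified Python/NumPy rendering of the voxel loop described above might look like the following; it assumes monostatic captures and envelope-magnitude samples only (no phase), so it sketches the bookkeeping rather than a working imaging system. Thresholding and visualization would follow as described above.

```python
import numpy as np

def backproject(waveforms, antenna_positions, voxel_centers, sample_rate, c=3.0e8):
    """Accumulate a 'density' for each voxel by summing, over all captures,
    the waveform sample found at the round-trip delay for that voxel.

    waveforms:          (num_captures, num_samples) array of envelope magnitudes
    antenna_positions:  (num_captures, 3) antenna location for each capture
    voxel_centers:      (num_voxels, 3) locations of the scene voxels
    """
    densities = np.zeros(len(voxel_centers))        # every voxel starts at zero
    num_samples = waveforms.shape[1]

    for waveform, antenna in zip(waveforms, antenna_positions):
        # Round-trip distance from this antenna to every voxel and back.
        distances = 2.0 * np.linalg.norm(voxel_centers - antenna, axis=1)
        # Convert distance to a sample index (a time delay into the waveform).
        sample_idx = np.round(distances / c * sample_rate).astype(int)
        valid = sample_idx < num_samples
        # Add the echo sample at that delay to each voxel's density.
        densities[valid] += waveform[sample_idx[valid]]

    return densities   # threshold afterwards to decide which voxels are "solid"
```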
More complex operation
The basic design of a synthetic-aperture radar system can be enhanced to collect more information. Most of these methods use the same basic principle of combining many pulses to form a synthetic aperture, but may involve additional antennae or significant additional processing.
SAR requires that echo captures be taken at multiple antenna positions. The more captures taken (at different antenna locations) the more reliable the target characterization.
Multiple captures can be obtained by moving a single antenna to different locations, by placing multiple stationary antennae at different locations, or combinations thereof.
The advantage of a single moving antenna is that it can be easily placed in any number of positions to provide any number of monostatic waveforms. For example, an antenna mounted on an airplane takes many captures per second as the plane travels.
The principal advantages of multiple static antennae are that a moving target can be characterized (assuming the capture electronics are fast enough), that no vehicle or motion machinery is necessary, and that antenna positions need not be derived from other, sometimes unreliable, information. (One problem with SAR aboard an airplane is knowing precise antenna positions as the plane travels).
For multiple static antennae, all combinations of monostatic and multistatic radar waveform captures are possible. Note, however, that it is not advantageous to capture a waveform for each of both transmission directions for a given pair of antennae, because those waveforms will be identical. When multiple static antennae are used, the total number of unique echo waveforms that can be captured is

N(N + 1)/2

where N is the number of unique antenna positions.
Radar waves have a polarization. Different materials reflect radar waves with different intensities, but anisotropic materials such as grass often reflect different polarizations with different intensities. Some materials will also convert one polarization into another. By emitting a mixture of polarizations and using receiving antennae with a specific polarization, several images can be collected from the same series of pulses. Frequently three such RX-TX polarizations (HH-pol, VV-pol, VH-pol) are used as the three color channels in a synthesized image. Interpretation of the resulting colors requires significant testing of known materials.
New developments in polarimetry include using the changes in the random polarization returns of some surfaces (such as grass or sand) and between two images of the same location at different times to determine where changes not visible to optical systems occurred. Examples include subterranean tunneling or paths of vehicles driving through the area being imaged. Enhanced SAR sea oil slick observation has been developed by appropriate physical modelling and use of fully polarimetric and dual-polarimetric measurements.
Rather than discarding the phase data, information can be extracted from it. If two observations of the same terrain from very similar positions are available, aperture synthesis can be performed to provide the resolution performance which would be given by a radar system with dimensions equal to the separation of the two measurements. This technique is called Interferometric SAR or InSAR.
If the two samples are obtained simultaneously (perhaps by placing two antennas on the same aircraft, some distance apart), then any phase difference will contain information about the angle from which the radar echo returned. Combining this with the distance information, one can determine the position in three dimensions of the image pixel. In other words, one can extract terrain altitude as well as radar reflectivity, producing a digital elevation model (DEM) with a single airplane pass. One aircraft application at the Canada Centre for Remote Sensing produced digital elevation maps with a resolution of 5 m and altitude errors also on the order of 5 m. Interferometry was used to map many regions of the Earth's surface with unprecedented accuracy using data from the Shuttle Radar Topography Mission.
If the two samples are separated in time, perhaps from two flights over the same terrain, then there are two possible sources of phase shift. The first is terrain altitude, as discussed above. The second is terrain motion: if the terrain has shifted between observations, it will return a different phase. The amount of shift required to cause a significant phase difference is on the order of the wavelength used. This means that if the terrain shifts by centimeters, it can be seen in the resulting image (a digital elevation map must be available to separate the two kinds of phase difference; a third pass may be necessary to produce one).
This second method offers a powerful tool in geology and geography. Glacier flow can be mapped with two passes. Maps showing the land deformation after a minor earthquake or after a volcanic eruption (showing the shrinkage of the whole volcano by several centimeters) have been published.
Differential interferometry (D-InSAR) requires at least two images plus a DEM. The DEM can be produced from GPS measurements or generated by interferometry, provided the time between acquisition of the image pair is short enough to guarantee minimal distortion of the image of the target surface. In principle, three images of the ground area acquired with similar imaging geometry are often adequate for D-InSAR. The principle for detecting ground movement is quite simple: one interferogram is created from the first two images (the reference, or topographic, interferogram), and a second interferogram is created that captures topography plus deformation. Subtracting the latter from the reference interferogram reveals differential fringes, indicating movement. This three-image D-InSAR technique is called the 3-pass or double-difference method.
Differential fringes that remain in the differential interferogram result from changes in SAR range to any displaced point on the ground between one interferogram and the next. One fringe corresponds to one full phase cycle at the SAR wavelength, about 5.6 cm for ERS and RADARSAT. Surface displacement away from the satellite look direction increases the path difference and hence the phase difference. Because the signal travels from the SAR antenna to the target and back again, the measured phase change corresponds to twice the displacement, so one fringe cycle (−π to +π), i.e. one wavelength of path change, corresponds to a displacement relative to the SAR antenna of only half a wavelength (2.8 cm). Various publications report the use of D-InSAR to measure subsidence, slope stability, landslides, glacier movement, and similar phenomena. A further advancement of this technique combines differential interferometry from ascending and descending satellite SAR passes to estimate 3-D ground movement; research in this area has shown that 3-D ground movement can be measured with accuracies comparable to GPS-based measurements.
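A small illustrative conversion from differential phase to line-of-sight displacement, using the standard two-way relation and the C-band wavelength quoted above, could look like this:

```python
import numpy as np

def los_displacement(delta_phase_rad, wavelength_m=0.056):
    """Line-of-sight displacement implied by a differential interferometric phase.
    The factor of 4*pi reflects the two-way (out-and-back) signal path."""
    return wavelength_m * delta_phase_rad / (4.0 * np.pi)

# One full fringe cycle (2*pi) corresponds to half a wavelength of displacement.
print(los_displacement(2.0 * np.pi))   # 0.028 m, i.e. 2.8 cm
```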
Conventional radar systems emit bursts of radio energy with a fairly narrow range of frequencies. A narrow-band channel, by definition, does not allow rapid changes in modulation. Since it is the change in a received signal that reveals the time of arrival of the signal (obviously an unchanging signal would reveal nothing about "when" it reflected from the target), a signal with only a slow change in modulation cannot reveal the distance to the target as well as can a signal with a quick change in modulation.
Ultra-wideband (UWB) refers to any radio transmission that uses a very large bandwidth – which is the same as saying it uses very rapid changes in modulation. Although there is no set bandwidth value that qualifies a signal as "UWB", systems using bandwidths greater than a sizable portion of the center frequency (typically about ten percent, or so) are most often called "UWB" systems. A typical UWB system might use a bandwidth of one-third to one-half of its center frequency. For example, some systems use a bandwidth of about 1 GHz centered around 3 GHz.
There are as many ways to increase the bandwidth of a signal as there are forms of modulation – it is simply a matter of increasing the rate of that modulation. However, the two most common methods used in UWB radar, including SAR, are very short pulses and high-bandwidth chirping. A general description of chirping appears elsewhere in this article. The bandwidth of a chirped system can be as narrow or as wide as the designers desire. Pulse-based UWB systems, being the more common method associated with the term "UWB radar", are described here.
A pulse-based radar system transmits very short pulses of electromagnetic energy, typically only a few waves or less. A very short pulse is, of course, a very rapidly changing signal, and thus occupies a very wide bandwidth. This allows far more accurate measurement of distance, and thus resolution.
The main disadvantage of pulse-based UWB SAR is that the transmitting and receiving front-end electronics are difficult to design for high-power applications. Specifically, the transmit duty cycle is so exceptionally low and pulse time so exceptionally short, that the electronics must be capable of extremely high instantaneous power to rival the average power of conventional radars. (Although it is true that UWB provides a notable gain in channel capacity over a narrow band signal because of the relationship of bandwidth in the Shannon–Hartley theorem and because the low receive duty cycle receives less noise, increasing the signal-to-noise ratio, there is still a notable disparity in link budget because conventional radar might be several orders of magnitude more powerful than a typical pulse-based radar.) So pulse-based UWB SAR is typically used in applications requiring average power levels in the microwatt or milliwatt range, and thus is used for scanning smaller, nearer target areas (several tens of meters), or in cases where lengthy integration (over a span of minutes) of the received signal is possible. Note, however, that this limitation is solved in chirped UWB radar systems.
The principal advantages of UWB radar are better resolution (a few millimeters using commercial off-the-shelf electronics) and more spectral information of target reflectivity.
Doppler Beam Sharpening commonly refers to the method of processing unfocused real-beam phase history to achieve better resolution than could be achieved by processing the real beam without it. Because the real aperture of the radar antenna is so small (compared to the wavelength in use), the radar energy spreads over a wide area, usually many degrees wide in the direction orthogonal (at right angles) to the direction of the platform (aircraft). Doppler-beam sharpening takes advantage of the motion of the platform in that targets ahead of the platform return a Doppler-upshifted signal (slightly higher in frequency) and targets behind the platform return a Doppler-downshifted signal (slightly lower in frequency).
The amount of shift varies with the angle forward or backward from the broadside (orthogonal) direction. By knowing the speed of the platform, the target signal return is placed in a specific angle "bin" that changes over time. Signals are integrated over time and thus the radar "beam" is synthetically reduced to a much narrower one – or, more accurately (and based on the ability to distinguish smaller Doppler shifts), the system can have hundreds of very "tight" beams concurrently. This technique dramatically improves angular resolution; however, it is far more difficult to take advantage of this technique for range resolution. (See pulse-Doppler radar.)
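As a numerical illustration of that angle-to-Doppler mapping (a standard relation; the platform speed and wavelength below are invented values), the two-way Doppler shift of a return from a scatterer at angle θ forward of broadside is roughly 2·v·sin(θ)/λ:

```python
import numpy as np

def doppler_shift(platform_speed_mps, wavelength_m, angle_from_broadside_rad):
    """Approximate two-way Doppler shift of a ground return: positive ahead of
    the platform (upshift), negative behind it (downshift)."""
    return 2.0 * platform_speed_mps * np.sin(angle_from_broadside_rad) / wavelength_m

v, lam = 200.0, 0.03                      # 200 m/s platform, 3 cm (X-band) wavelength
for deg in (-2.0, 0.0, 2.0):
    print(deg, round(doppler_shift(v, lam, np.radians(deg)), 1), "Hz")
```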
Chirped (pulse-compressed) radars
A common technique for many radar systems (usually also found in SAR systems) is to "chirp" the signal. In a "chirped" radar, the pulse is allowed to be much longer. A longer pulse allows more energy to be emitted, and hence received, but usually hinders range resolution. But in a chirped radar, this longer pulse also has a frequency shift during the pulse (hence the chirp or frequency shift). When the "chirped" signal is returned, it must be correlated with the sent pulse. Classically, in analog systems, it is passed to a dispersive delay line (often a SAW device) that has the property of varying velocity of propagation based on frequency. This technique "compresses" the pulse in time – thus having the effect of a much shorter pulse (improved range resolution) while having the benefit of longer pulse length (much more signal returned). Newer systems use digital pulse correlation to find the pulse return in the signal.
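The following sketch, with illustrative parameters only, shows digital pulse compression of a linear chirp by correlating the received signal with the transmitted pulse (the digital counterpart of the dispersive delay line described above); the compressed peak marks the echo delay.

```python
import numpy as np

fs, duration, bandwidth = 100e6, 10e-6, 20e6     # sample rate, pulse length, chirp bandwidth
t = np.arange(int(fs * duration)) / fs
chirp = np.exp(1j * np.pi * (bandwidth / duration) * t**2)   # linear-FM (chirped) pulse

# Simulate an echo delayed by 25 microseconds inside a longer receive window.
rx = np.zeros(5000, dtype=complex)
delay_samples = int(25e-6 * fs)
rx[delay_samples:delay_samples + len(chirp)] += chirp

# Matched filtering: correlate the received signal with the transmitted chirp.
compressed = np.abs(np.correlate(rx, chirp, mode='valid'))
print(compressed.argmax() == delay_samples)      # True: the peak sits at the echo delay
```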
Highly accurate data can be collected by aircraft overflying the terrain in question. In the 1980s, as a prototype for instruments to be flown on the NASA Space Shuttles, NASA operated a synthetic-aperture radar on a NASA Convair 990. In 1986, this plane caught fire on takeoff. In 1988, NASA rebuilt a C, L, and P-band SAR to fly on the NASA DC-8 aircraft. Called AIRSAR, it flew missions at sites around the world until 2004. Another such aircraft, the Convair 580, was flown by the Canada Center for Remote Sensing until about 1996 when it was handed over to Environment Canada due to budgetary reasons. Most land-surveying applications are now carried out by satellite observation. Satellites such as ERS-1/2, JERS-1, Envisat ASAR, and RADARSAT-1 were launched explicitly to carry out this sort of observation. Their capabilities differ, particularly in their support for interferometry, but all have collected tremendous amounts of valuable data. The Space Shuttle also carried synthetic-aperture radar equipment during the SIR-A and SIR-B missions during the 1980s, the Shuttle Radar Laboratory (SRL) missions in 1994 and the Shuttle Radar Topography Mission in 2000.
Synthetic-aperture radar was first used by NASA on JPL's Seasat oceanographic satellite in 1978 (this mission also carried an altimeter and a scatterometer); it was later developed more extensively on the Spaceborne Imaging Radar (SIR) missions on the space shuttle in 1981, 1984 and 1994. The Cassini mission to Saturn is currently using SAR to map the surface of the planet's major moon Titan, whose surface is partly hidden from direct optical inspection by atmospheric haze. The SHARAD sounding radar on the Mars Reconnaissance Orbiter and MARSIS instrument on Mars Express have observed bedrock beneath the surface of the Mars polar ice and also indicated the likelihood of substantial water ice in the Martian middle latitudes. The Lunar Reconnaissance Orbiter, launched in 2009, carries a SAR instrument called Mini-RF, which was designed largely to look for water ice deposits on the poles of the Moon.
The Mineseeker Project is designing a system for determining whether regions contain landmines based on a blimp carrying ultra-wideband synthetic-aperture radar. Initial trials show promise; the radar is able to detect even buried plastic mines.
SAR has been used in radio astronomy for many years to simulate a large radio telescope by combining observations taken from multiple locations using a mobile antenna.
- Terrestrial SAR Interferometry (TInSAR)
- Radar MASINT
- SAR Lupe
- Remote sensing
- Earth observation satellite
- Magellan space probe
- Inverse synthetic aperture radar (ISAR)
- Synthetic array heterodyne detection (SAHD)
- Aperture synthesis
- Synthetic aperture sonar
- Synthetically thinned aperture radar
- Very Long Baseline Interferometry (VLBI)
- Interferometric synthetic aperture radar (InSAR)
- Speckle noise
- Wave radar
- "Synthetic Aperture Radar", L. J. Cutrona, Chapter 23 (25 pp) of the McGraw Hill "Radar Handbook", 1970. (Written while optical data processing was still the only workable method, by the person who first led that development.)
- "A short history of the Optics Group of the Willow Run Laboratories," Emmett N. Leith, in Trends in Optics: Research, Development, and Applications (book), Anna Consortini, Academic Press, San Diego: 1996.
- "Sighted Automation and Fine Resolution Imaging", W. M. Brown, J. L. Walker, and W. R. Boario, IEEE Transactions on Aerospace and Electronic Systems, Vol. 40, No. 4, October 2004, pp 1426–1445.
- "In Memory of Carl A. Wiley," A. W. Love, IEEE Antennas and Propagation Society Newsletter, pp 17–18, June 1985.
- "Synthetic Aperture Radars: A Paradigm for Technology Evolution", C. A. Wiley, IEEE Transactions on Aerospace and Electronic Systems, v. AES-21, n. 3, pp 440–443, May 1985
- Gart, Jason H. "Electronics and Aerospace Industry in Cold War Arizona, 1945–1968: Motorola, Hughes Aircraft, Goodyear Aircraft." Phd diss., Arizona State University, 2006.
- "Some Early Developments in Synthetic Aperture Radar Systems," C. W. Sherwin, J. P. Ruina, and R. D. Rawcliffe, IRE Transactions on Military Electronics, April 1962, pp. 111–115.
- This memo was one of about 20 published as a volume subsidiary to the following reference. No unclassified copy has yet been located.
- "Problems of Battlefield Surveillance", Report of Project TEOTA (The Eyes Of The Army), 1 May 1953, Office of the Chief Signal Officer. Defense Technical Information Center (Document AD 32532)
- "A Doppler Technique for Obtaining Very Fine Angular Resolution from a Side-Looking Airborne Radar" Report of Project Michigan No. 2144-5-T, The University of Michigan, Willow Run Research Center, July 1954. (No declassified copy of this historic originally confidential report has yet been located.)
- "High-Resolution Radar Achievements During Preliminary Flight Tests", W. A. Blikken and G.O. Hall, Institute of Science and Technology, Univ. of Michigan, 01 Sept 1957. Defense Technical Information Center (Document AD148507)
- "A High-Resolution Radar Combat-Intelligence System", L. J. Cutrona, W. E. Vivian, E. N. Leith, and G. O Hall; IRE Transactions on Military Electronics, April 1961, pp 127–131
- "Optical Data Processing and Filtering Systems", L. J. Cutrona, E. N. Leith, C. J. Palermo, and L. J. Porcello; IRE Transactions on Information Theory, June 1960, pp 386–400.
- Quill (satellite)
- "Observation of the earth and its environment: survey of missions and sensors," Herbert J. Kramer
- "Principles of Synthetic Aperture Radar", S. W. McCandless and C. R. Jackson, Chapter 1 of "SAR Marine Users Manual", NOAA, 2004, p.11.
- The first and definitive monograph on SAR is Synthetic Aperture Radar: Systems and Signal Processing (Wiley Series in Remote Sensing and Image Processing) by John C. Curlander and Robert N. McDonough
- The development of synthetic-aperture radar (SAR) is examined in Gart, Jason H. "Electronics and Aerospace Industry in Cold War Arizona, 1945–1968: Motorola, Hughes Aircraft, Goodyear Aircraft." Phd diss., Arizona State University, 2006.
- A text that includes an introduction on SAR suitable for beginners is "Introduction to Microwave Remote Sensing" by Iain H Woodhouse, CRC Press, 2006.
- NHAZCA – Natural Hazards Control and Assessment
- Publication: SAR simulation (Electromagnetic simulation software for SAR imagery studies: www.oktal-se.fr)
- Sandia National Laboratories SAR Page (Home of miniSAR, smallest hi-res SAR)
- The Imaging Radar Home Page (NASA SAR missions)
- InSAR measurements from the Space Shuttle
- JPL InSAR Images
- GeoSAR – GeoSAR is a dualband XBand and PBand system owned and operated by Fugro EarthData
- Airborne Synthetic-Aperture Radar (AIRSAR) (NASA Airborne SAR)
- The CCRS airborne SAR page (Canadian airborne missions)
- RADARSAT international (Canadian radar satellites)
- The ERS missions (European radar satellites)
- The ENVISAT mission (ESA's most recent SAR satellite)
- Earth Snapshot – Web Portal dedicated to Earth Observation. Includes commented satellite images, information on storms, hurricanes, fires and meteorological phenomena.
- The JERS satellites (Japanese radar satellites)
- Images from the Space Shuttle SAR instrument
- The Alaska Satellite Facility has numerous technical documents, including an introductory text on SAR theory and scientific applications
- BYU SAR projects and images Images from BYU's three SAR systems (YSAR, YINSAR, μSAR)
- NSSDC Master Catalog information on Venera 15 and 16
- NSSDC Master Catalog information on Magellan Mission
- PolsarPro Open Source Polarimetric SAR Processing Toolbox sponsored by ESA.
- Next ESA SAR Toolbox for viewing and analyzing SAR Level 1 data and higher from various missions
- Przemysłowy Instytut Telekomunikacji S.A.
- Birsen Yazici's SAR related publications at Rensselaer Polytechnic Institute | http://en.wikipedia.org/wiki/Synthetic_aperture_radar | 13 |
15 | Darwin's convincing arguments for natural selection as a mechanism for species divergence laid the foundation for additional work in evolutionary theory. The early twentieth-century rediscovery of the work of the Moravian monk Gregor Mendel regarding the inheritance of physical traits in pea plants, and the attempt to reconcile it with Darwin's mechanism of natural selection, became the focus of many of the finest scientific minds of the period.
The successful reconciliation of the two concepts led to what is commonly called the Modern Evolutionary Synthesis, a fruitful period of research and theory unification extending from the early 1930s through the 1950s. Some of the principal scientists who contributed significantly to the Modern Evolutionary Synthesis were Theodosius Dobzhansky, Ernst Mayr, George Gaylord Simpson, and Julian Huxley.
Theodosius Dobzhansky was a Ukrainian geneticist, evolutionary biologist, and a critical figure in formulating the modern evolutionary synthesis. After he immigrated to the United States from the Soviet Union in 1927, Dobzhansky worked with Thomas Hunt Morgan, using the fruit fly Drosophila melanogaster as a genetic model and inducing hereditary mutations through radiation. The idea that natural selection operates on mutations in genes was Dobzhansky's most important contribution to genetics and evolutionary biology.
In his 1937 book, Genetics and the Origin of Species, Dobzhansky strongly made the case for the reconciliation of evolutionary biology and genetics. A devoted believer in evolution and Darwin's theory of natural selection, Dobzhansky wrote a final scientific essay shortly before his death entitled, "Nothing in Biology Makes Sense Except in the Light of Evolution."
Ernst Mayr was a German taxonomist, ornithologist, and evolutionary biologist. His role in the modern evolutionary synthesis is based upon his refining the definition of "species" from a member of a structurally or morphologically similar group to a population whose members can breed only among themselves. This new definition solved the species problem that had defied naturalists and biologists since before the time of Darwin and into the mid-twentieth century. Mayr believed that geographical isolation was the most important factor in the formation of new, distinct species.
Mayr, in his 1942 book, Systematics and the Origin of Species from the Viewpoint of a Zoologist, theorized that the differences among organisms (finches among others) so readily apparent to Darwin in the Galapagos Islands were due to small sub-populations, isolated on individual islands, undergoing rapid variation through the mechanism of natural selection and evolving into new, distinct species.
G. Ledyard Stebbins was an American botanist and geneticist broadly regarded as one of the foremost unifiers of the modern evolutionary synthesis. Variation and Evolution in Plants, published by Stebbins in 1950, helped to reconcile modern genetics and Darwinian natural selection for the purpose of explaining plant evolution. Variation and Evolution in Plants clarified the evolutionary mechanisms at work in plants at the genetic level. Although the volume contained, by the author's own admission, more synthesis than original research it widely influenced evolutionary biologists of the time and is still, after 60 years, an important book.
The book's place in the modern evolutionary synthesis is secured by its success in bringing plant evolution and animal evolution into a common model. The book also virtually eliminated any credible support for alternative evolutionary mechanisms in plants, such as Jean-Baptiste Lamarck's acquired characteristics or soft inheritance which were still defended by some plant scientists as late as the 1950s.
George Gaylord Simpson was an American paleontologist and the most influential practitioner of vertebrate paleontology of his time as well as a premier figure in the modern evolutionary synthesis. Simpson was an authority on extinct mammals and their migrations and viewed paleontology with its long timeline as a field well suited to the study of evolution.
In his major book published in 1944, Tempo and Mode in Evolution, Simpson divided evolutionary change into "tempo" (rates of change) and "mode" (manner or mechanism of change), with tempo a regulating factor of mode. Simpson's primary contention was that the fossil record clearly and inarguably supports Darwin's theory of natural selection. Natural selection is the mechanism acting on random variations in a population, and it is the driving force behind evolutionary change.
Julian Huxley was the grandson of T. H. Huxley, the friend and champion of Charles Darwin and his theory of natural selection. Julian Huxley was also an English evolutionary biologist, writer, and popularizer of the life sciences. Furthermore, he was one of the most articulate and outspoken proponents of natural selection and the modern evolutionary synthesis. Huxley's contributions to new scientific knowledge were not of great importance when compared with his work to synthesize scientific findings into general principles that could be clearly communicated to non-specialists. It was Huxley who coined the terms "the new synthesis" and "evolutionary synthesis" in his most well-known and influential book, Evolution: The Modern Synthesis (1942).
Darwinism had declined as a central principle of biology since the late nineteenth century with many biologists, especially those conducting research in genetics, abandoning natural selection as a primary mechanism in evolution. Huxley's book fashioned an updated version of Darwinism that embraced genetics and research findings from other life science disciplines. Huxley's greatest originality in formulating the new, evolutionary synthesis was his understanding and use of supporting arguments for the theory of natural selection that came from the new mathematically oriented population geneticists such as J. B. S. Haldane, R. A. Fisher, and Sewall Wright.
R. A. Fisher was a superb mathematician who made seminal contributions to evolutionary biology and was a pioneer in the statistical analysis of designed experiments. After graduating from Cambridge University in 1912 and failing the medical examination required to serve in Great Britain's army during the First World War, Fisher worked as a statistician for the city of London and then at the Rothamsted Experimental Station in Hertfordshire, England. His first book, Statistical Methods for Research Workers (1925), is still used by agricultural biologists. Fisher was knighted in 1952 for his contribution to English science.
The generation before Fisher had come to believe that Darwin's natural selection was an ineffective process of evolution because Darwin was wrong in supposing that the traits of parents were faithfully reproduced in their offspring. Rather, inheritance obeyed its own genetic laws, as discovered by Gregor Mendel in the 1860s.
Fisher demonstrated that natural selection, acting on a large, genetically varied population obeying Mendel's laws of inheritance, in fact produced a spread of adaptive genes through the population.
The key for Fisher's reconciliation of natural selection and Mendelian inheritance was two-fold: mathematically distinguishing between variants that are genetically heritable from those that are not; and applying statistics to understand how multiple genes affect the heritable traits to produce the continuous array of variations found in nature. He is best-known among evolutionary biologists for his "fundamental theorem of natural selection."
J. B. S. Haldane was one of the great polymaths of the twentieth century. He was an infantry officer in France during the First World War, an evolutionary theorist of profound influence, a biochemist, and an acclaimed writer of popular science and science fiction. He spent his later years in India where he eventually became a citizen. His great contribution to the Modern Synthesis, The Causes of Evolution, was widely read in part because of his skillful prose and forceful argumentation. Although mathematical at its core, Haldane tucked the mathematical proofs into the book's appendix.
Haldane's main thesis is that natural selection is the main cause of evolution. One might have thought that Charles Darwin proved just that seventy years prior. But, in Haldane's day, biologists were increasingly convinced that natural selection was ineffective because inheritance obeyed its own rules, as discovered by Gregor Mendel in the 1860s. Whether or not an offspring inherits a trait from its parents depends on the rules for genetic inheritance, not, as Darwin thought, on whether the trait is selectively advantageous.
Haldane provided a quantitative account of how, in fact, those genetic varieties (transmitted according to Mendelian rules) that enjoyed a slight competitive advantage would, after many generations, come to predominate in a population. This vindicated natural selection as a predominant cause of evolution.
Thomas Hunt Morgan was an American embryologist and geneticist who was awarded the Nobel Prize in Physiology or Medicine in 1933, the first time the prize was presented for work in the field of genetics. His most important contribution to genetics and evolutionary biology was to demonstrate that actual individual genes are physically located on specific chromosomes in the nucleus of cells.
Morgan was initially doubtful of the physical reality of Gregor Mendel's units of inheritance, genes, and of Charles Darwin's theory of natural selection. However, based on his work with fruit flies (Drosophila melanogaster), Morgan became convinced of the compatible and complementary nature of Darwin's natural selection and Mendel's inheritance theory in transforming species. Morgan presented a series of lectures at Princeton University in 1916, later published as A Critique of the Theory of Evolution, which unified the Mendelian theory of inheritance and the Darwinian theory of natural selection. Morgan and his students at Columbia University are credited with establishing the "Mendelian-chromosome" theory. His pioneering work provided theory and tools for later geneticists and biologists to formulate the revised version of Darwin's theory of natural selection that became known as the Modern Evolutionary Synthesis.
This letter from Charles Darwin is one of acknowledgement and gratitude to an unnamed correspondent. Some evidence indicates the letter was sent to Charles Lyell, thanking him for the gift of a fourth edition of his The Geological Evidences of the Antiquity of Man.
The stationery is from Darwin's estate, Down House, in Kent. The signature "Ch. Darwin" and the handwriting of the body of the letter do not match, and thus the note was likely written by an assistant for Darwin's signature.
Sir Richard Owen was a British naturalist, comparative anatomist, paleontologist, and a vocal and constant critic of Charles Darwin's theory of natural selection. Darwin and Owen developed an uneasy and distrustful relationship after Owen published some scathing reviews of The Origin of Species.
Owen's most enduring legacy was championing and lobbying for the British Museum of Natural History in London. He was the first director of the museum upon its establishment in 1881.
This letter discusses the location of a rich fossil deposit in Argentina and Darwin urges Owen to seek support from the British government in this regard. | http://mulibraries.missouri.edu/specialcollections/exhibits/darwin/genes.htm | 13 |
20 | Higher Order Thinking
As students grow older, they are asked by their teachers to do more and more with the information they have stored in their brains. These types of requests require accessing higher order thinking (HOT).
Most of us don't think about thinking — we just do it. But educators, parents, and legislators have been thinking more about thinking, and thinking about how we want teachers to teach our students to think.
As students move from elementary to middle to high school, they are asked by their teachers to do more and more with the information they have stored in their brains. They may ask students to write a new ending for a book they've been reading, or they may ask why a certain character in the story behaved in a particular way. If they are studying sound in science, students might be asked to design and construct a new kind of musical instrument. In language arts, they may be asked to compare and contrast Julius Caesar and Adolph Hitler, or to talk about the lessons Nazism holds for world events today. These types of requests require higher order thinking.
Higher order thinking may seem easy for some students, but difficult for others. But here's the good news: (1) higher order thinking, like most skills, can be learned; and (2) with practice, a person's higher order thinking skill level can increase.
What is higher order thinking?
Higher order thinking is thinking on a level that is higher than memorizing facts or telling something back to someone exactly the way it was told to you. When a person memorizes and gives back the information without having to think about it, we call that rote memory. That's because it's much like a robot; it does what it's programmed to do, but it doesn't think for itself.
Higher order thinking, or "HOT" for short, takes thinking to higher levels than restating the facts. HOT requires that we do something with the facts. We must understand them, infer from them, connect them to other facts and concepts, categorize them, manipulate them, put them together in new or novel ways, and apply them as we seek new solutions to new problems. Following are some ways to access higher order thinking.
To understand a group of facts, it is important to understand the conceptual "family" to which this group of facts belongs. A concept is an idea around which a group of ideas revolves — a mental representation of a group of facts or ideas that somehow belong together. Concepts help us to organize our thinking.
Football, basketball, tennis, swimming, boxing, soccer, or archery all fit the concept of sports. In addition, a person might also group these sports into two more specific concept categories: team sports, such as football, basketball, and soccer; and individual sports, such as tennis, swimming, boxing, and archery.
Concepts can represent objects, activities, or living things. They may also represent properties such as color, texture, and size (for example, blue, smooth, and tiny); things that are abstract (for example, faith, hope, and charity); and relations (for example, brighter than and faster than). Concepts come in a variety of forms, including concrete, abstract, verbal, nonverbal, and process.
- Concrete or Abstract
Concrete concepts are those that we can see, touch, hear, taste, or smell. Dogs, chairs, telephones and hamburgers are examples of concrete concepts. Abstract concepts can be used and thought about, but we cannot use our senses to recognize them as we can with concrete concepts. In order to understand abstract concepts, we either have to experience them or compare them to something else we already know. Imagination, friendship, freedom, and jealousy are examples of abstract concepts. Concrete concepts are generally easier to understand than abstract ones because a person can actually see or touch concrete concepts. However, as students move from elementary to middle to high school, they need to be able to grasp more and more abstract concepts. Not only are abstract concepts harder for students to learn, but they are also harder for teachers to teach.
- Verbal or Nonverbal
Verbal concepts are those that use language to explain them. Verbal concepts are described by using words, such as love, habitat, and peace. A concept may be both abstract and verbal, such as democracy, or both concrete and verbal, such as tool. Nonverbal concepts are those that lend themselves to being easily understood by being pictured or visualized, such as circle, cup, and evaporation.
Many times both verbal and non-verbal concepts can be used to explain something. While many people prefer one over the other, it is good to think about a concept both by picturing it and by describing it with words. Constructing both visual and verbal representations yields a more thorough understanding of the concept.
Process concepts are those that explain how things happen or work. They often include a number of steps that a person must understand in order to master the concept as a whole. Photosynthesis is an example of a process concept in science. The photosynthesis process has certain steps that must take place in a certain order. Math and science courses use process concepts frequently.
When a student is exposed to a new concept, it is important to connect the new concept to concepts he already knows. He can do this by classifying, categorizing, recognizing patterns, or chaining. The idea behind each of these connecting processes is to find all the "relatives" of that concept and make a "family tree" for the concept.
A first grader may be learning all about Thanksgiving. A larger concept that Thanksgiving belongs to could be holidays, and a larger concept that holidays belong to is celebrations. Other holidays may include Christmas, Hanukkah, and the Fourth of July. These are all celebrations. Some celebrations, such as weddings, birthdays and funerals, however, are not holidays. The larger concept of celebrations, then, includes celebrations that are holidays and celebrations that are not holidays.
A student needs to practice concept connection. When he is exposed to new information, he should look through his memory for things that seem related to the new information. If a student is discussing what is going on in Kosovo, for example, he might ask himself what the Civil War, the Holocaust, and Bosnia have in common with the conflict in Kosovo.
Bernice McCarthy, a well-known educator, summed it up like this: "Learning is the making of meaning. Meaning is making connections. Connections are the concepts." McCarthy is saying that in order to learn something, we must understand its meaning. We make meaning by connecting new ideas to ones we already have. The links or chains with which we connect new ideas or information to ones we already know are their common concepts.
Schema is a pattern or arrangement of knowledge that a person already has stored in his brain that helps him understand new information. A student may have a definite image in his mind of what a reptile looks like from information he has learned about reptiles from pictures that he has been shown, by what he has read and by what he has been told. When he encounters a creature that he has never seen before, and the creature has all of the qualities that he has stored in his brain about reptiles, then he can infer or draw the conclusion that it probably is a reptile.
Some schemas are also linked to rules and predictable patterns that we have learned. Students can develop schemata for the tests a certain teacher gives, because she always gives the same type of test. This helps a student to know how to study for the test because he knows the kinds of questions the teacher is going to ask. A schema does not always follow a pattern or a rule, however, due to exceptions or irregularities. For example, students may think that they have mastered a spelling or grammar rule only to have the teacher give an exception to the rule. On the whole, however, using a schema or pattern is a way to make helpful predictions.
Metaphors, similes, and analogies
Metaphors, similes and analogies are ways to explain the abstract or unfamiliar by showing how the abstract/unfamiliar phenomena shares characteristics with or compares to a familiar object, idea or concept. Metaphors, similes and analogies may also result in the creation of an image in the mind's eye. The ability to create similes, metaphors and analogies is a greater skill than understanding those created by others. A correctly formed metaphor, simile or analogy indicates that the person understands the subject matter so well that he can make another representation of it. This represents concept connection at higher levels. The capacity to reason using metaphors, similes and analogies is related to the ability to draw inferences from what is read or discussed.
Not all thinking is done in words. Sometimes a person may form visual images or pictures in her mind that are equally as meaningful as, or more meaningful than, words. When many of us are asked to give directions to a person, we are able to see a map or visual in our minds that helps us to give these directions. When you read a really good novel, do you visualize what the setting and the characters look like? Are you running your own movie camera? When you are asked the difference between a square and a trapezoid, do you see in your mind what each of these figures looks like? If you can do these things, then you have the ability to use visual imagery. Visualization is especially helpful to students in subjects such as literature, geography, biology, and math.
To infer is to draw a conclusion — to conclude or surmise from presented evidence. An inference is the conclusion drawn from a set of facts or circumstances. If a person infers that something has happened, he does not see, hear, feel, smell, or taste the actual event. But from what he knows, it makes sense to think that it has happened. Sometimes inferring is described as "reading between the lines." Authors often give clues that are not directly spelled out. When a reader uses the clues to gain a deeper understanding of what he is reading, he is inferring. Assessments of the ability to make inferences about written text are used to measure reading skill or listening skill.
Inferring is sometimes confused with implying. An author or speaker implies while the reader or listener infers. When we say that written text or a speaker implies something, we mean that something is conveyed or suggested without being stated outright. For example, when the governor said he would not rule out a tax increase, he implied that he might find it necessary to advocate raising some taxes. Inference, on the other hand, is a thought process performed by a reader or listener to draw conclusions. When the governor said he would not rule out a tax increase, the listener or reader may infer that the governor had been given new information since he had until now been in favor of tax reductions.
Not a day goes by that a person doesn't have to solve problems. From the moment a person gets up in the morning and decides what to eat for breakfast, what to wear to work or to school, or how to explain to the teacher why he didn't get his homework done or to his boss why his monthly report isn't finished, he is solving problems. Problems can affect many aspects of our lives, including social, personal, health, and, of course, school.
Being able to problem solve in school is extremely important. What to write for an essay, how to solve a problem in math, choosing the correct materials for a science experiment, or even deciding who to sit next to at lunch can all be significant problems that a student must solve. How a student goes about solving his problems is important in terms of how successful the results will be. Problems need to be worked through systematically and logically in order to come to a satisfactory conclusion.
When problem solving, it is important to remember the steps that need to be taken. First, the problem needs to be defined and given definite limitations by drawing a mental box around it.
Being creative, considering several strategies, and trying out multiple strategies as a means toward reaching the solution is part of being a good problem solver. It is important in problem solving to remember that mistakes are learning opportunities because a person learns what doesn't work. In scientific research, the goal is as often to prove a theory wrong as it is to prove a theory right. Thomas Edison was asked once how he kept from getting discouraged when he had made so many mistakes before he perfected his idea of the light bulb. He had tried over 2,000 ways before one worked. Edison responded that he had not made 2,000 mistakes, but rather that he had over 2,000 learning experiences that moved him closer to the answer.
How often have students heard the teacher say, "Let's hear your ideas about this," or "I need to have some more ideas about how this will work?" Coming up with original ideas is very important in higher order thinking. But what are ideas and where do they come from?
Some ideas come from insight — a spontaneous cohesion of several thoughts. An insight is like a light bulb turning on in a person's head. Insights are great thoughts that help a person to see or understand something, quite often something that he has not been able to figure out before. For example, a student may be having trouble getting all of his homework done every night. Usually this student leaves his math homework until last because he doesn't like math and math is hard for him. Suddenly, he considers that if he does his hardest subject first, the rest of the homework won't seem so bad, and he might actually finish it all. This student just had an insightful idea about how to solve his homework problem.
Some ideas are called original ideas. These are thoughts that a person has made up himself and has not copied from someone else. Many teachers look for students who can come up with ideas that no other students have had. To have original ideas, a person has to use his creative imagination.
One way to generate original ideas or to create a new method of doing things is by brainstorming. Brainstorming can be done individually or in groups, although we usually do this best in groups. It has been said that the best way to have a good idea is to have a lot of ideas. In order to have a lot of ideas, we need to brainstorm. When brainstorming, the goal is to generate as many ideas as possible, regardless of the feasibility of the idea.
If students brainstorm in a group, they can build on each other's ideas. One student's suggestion may give another student a terrific idea that he would not have thought of without the other student's idea. Group members can "hitchhike" on each other's ideas, and modify each other's ideas in order to make new ideas. Becoming good at brainstorming has a practical application to adult life as well as being useful in school. Many new products, such as the iron that turns itself off, were developed by adults through brainstorming.
Another way to form ideas is to use critical thinking. This involves a person using his own knowledge or point of view to decide what is right or wrong about someone else's ideas. This is sometimes called "having a mind of your own." It means that a person doesn't have to believe or accept everything that someone else says or writes. For example, a friend decides that Babe Ruth is the best baseball player who ever lived. But another friend may feel that Mark McGwire deserves that title, and he may have lots of facts to support his position.
In addition to evaluating other people's ideas, critical thinking can also be used to evaluate things. A person does this when he is deciding which new telephone or book to buy. Of course, critical thinking can sometimes be carried too far. Nobody likes the person who argues about everything and only feels his point of view is right. If used reasonably, however, critical thinking can help a student be successful in school and elsewhere.
Creativity can be measured by its fluency, flexibility, originality, and elaboration. The most creative minds are those for whom creative thought is fluid. The most creative thinkers are also flexible within their creating — they are willing and able to manipulate their thinking to improve upon that which they are creating. Creative thinkers are able to elaborate on their creation, largely because it is their creation and not one that has been borrowed. When creative thinkers are at the peak of their creative process, they may enter a state of concentration so focused that they are totally absorbed in the activity at hand. They may be in effortless control and at the peak of their abilities. Psychologist Mihaly Csikszentmihalyi refers to this fluid and elaborative state of mind as "flow." Finally, creative thinkers are original; they do not "copy" the thinking of others but rather build their thinking from the ground up.
Creativity is usually thought of as divergent thinking — the ability to spin off one's thinking in many directions. But creative thinking is also convergent, for when someone has created something, his thinking may converge only on ideas and information that pertain to that particular invention.
Robert Sternberg, a well-known professor of psychology and education at Yale University, says that successful people use three kinds of intelligence: analytical, creative, and practical. A successful person, according to Sternberg, uses all three.
Analytical intelligence uses critical thinking. The analytical student most often gets high grades and high test scores in traditional school. The analytical student likes school and is liked by her teachers. A person with analytical intelligence is good at analyzing material. Analytical thinking includes judging, evaluating, comparing, contrasting, critiquing, explaining why, and examining.
When students are given three choices for a project in science, they analyze each in their own way and then make their choices. In literature class, students critique a poem. In math class, they solve word problems. In history class, students compare and contrast the causes of World War I and World War II. And after school at football practice, the football coach and the team analyze their upcoming opponents each week.
Analytical thinking is also used to evaluate things. A person does this when he uses critical thinking to decide which computer or skateboard to buy. He also does this when he decides which movie to go to or which TV program to watch.
Creative thinkers are original thinkers who see things differently. Creative thinkers often feel confined by school because they are asked to do things in an uncreative way. They may often get average grades in a traditional school, ask questions that may seem odd or unusual, and are sometimes viewed by their teachers as a "pain" because they want to do things their way.
Creative thinking involves creating, discovering, imagining, supposing, designing, "what if-ing," inventing and producing. Forming creative ideas means coming up with an unusual, novel, or surprising solution to a problem. People who have creative ideas are able to apply problem-solving skills in a new situation. They see relationships others just don't see until they are pointed out. Inventors such as Thomas Edison took the information they had and regrouped it until something new happened. Creative thinking has novelty, flexibility and originality.
Have you ever seen an advertisement for something new on TV and thought to yourself, "Now, why didn't I think of that?" The person who thought of the product being advertised is now making millions because he connected ideas that had never been connected. He also solved a problem common to many people, and now many people are buying his product.
The invention of Velcro is a good example. The inventor of Velcro got his idea from a cocklebur that stuck on his pants when he walked in the woods. When he looked closely at the cocklebur on his pants, he saw that one "side" had lots of tiny hooks (the cocklebur) and the other "side" was made of lots of round loops (the pants material). He also noticed how firmly the cocklebur was stuck to his pants. He decided that hooked and looped surfaces could be a good way to join two items. Thus, Velcro was born.
Being creative isn't just about inventing. It's also about solving unexpected problems that come up every day. For example, the Apollo 13 mission had a problem with the air filter in the lunar module. The filter in the lunar module needed to be replaced with the one from the command module, but the two filters had differently shaped fittings that could not be interchanged. The ground crew brainstormed and figured out a way to make the new filter fit into the old hole by using plastic baggies, duct tape, and a sock, and creatively solved the problem with the materials at hand.
Solutions to the world's problems will never be found in textbooks. They reside in the minds of creative, inventive people. So it is important for all students to exercise their creative "muscles."
People with good practical intelligence are said to have good common sense. They may not make the best grades in traditional school, but they know how to use knowledge, how to adapt it to different situations, and often how to get along with others. Practical thinkers can take knowledge and apply it to real life situations. Practical thinking involves practicing, demonstrating, using, applying and implementing information.
For example, in science class, students may tell all the ways reptiles are useful to people. In math class, students may develop a monthly food budget for a family of four based on actual food costs at the local grocery. In history class, students may explain how a certain law has affected their lives, and how their lives might be different if that law did not exist. In literature class, they may tell what general lesson can be learned from Tom Sawyer's way of persuading his friends to whitewash Aunt Polly's fence, and they give examples of how that method is used in today's advertising. All of these are examples of how to use practical intelligence.
So which type of thinking — analytical, creative and practical — is best or most useful? There is no one best way to be smart or to think. All three kinds of thinking are useful and interrelated, and all three contribute equally toward successful intelligence. Analytical thinking is good for analyzing and evaluating information. Creative thinking allows us to come up with novel solutions and original ideas. Practical thinking helps us adapt to our environments and use common sense in real life. The Velcro inventor first used creative intelligence to transform the relationship of cockleburs and his pants into a broader concept. He used practical intelligence to realize the many applications for his creative invention. He also used analytical intelligence to examine each of those potential applications and then decide which applications he would pursue first. Although many of us are stronger in one of the three intelligences than the other two, more success is achieved when we learn to balance and use all three.
Metacognition means thinking about thinking. There are two basic parts to metacognition: thinking about your thinking and knowing about knowing. Everyone needs to understand the way he or she thinks.
A person needs to know his mental strengths and weaknesses. Is he good at solving problems, understanding concepts, and/or following directions? Is he more analytical, creative or practical in his thinking? Does he learn best by listening, seeing, doing, or by using a combination of all three? Which memory techniques work best for him?
The second part of metacognition is monitoring and regulating how he thinks and learns. It is deciding how to best accomplish a task by using strategies and skills effectively. For example, how would he best learn new spelling words? By writing them out several times? By spelling them out loud a number of times? Or by spelling them out loud while he writes them a few times?
Thinking about the way he understands things and monitoring his progress can help a person become a better learner and thinker. For example, a student who knows he is not good at remembering assignments realizes he should use a plan book. A student who knows he is not a fast reader realizes that he must give himself extra time to complete the assignment. Both of these students know their weak spots and are doing something to get around them.
Robert Sternberg defines successful intelligence as mental self-management. Mental self-management can be described as an expanded view of metacognition. According to Sternberg, mental self-management is composed of six steps:
- Know your strengths and weaknesses.
- Capitalize on your strengths and compensate for your weaknesses.
- Defy negative expectations.
- Believe in yourself. This is called self-efficacy.
- Seek out role models — people from whom you can learn.
- Seek out an environment where you can make a difference.
Teaching for wisdom
According to Sternberg, wisdom requires one to know what one knows and what one does not know, as well as what can be known and cannot be known. Further, Sternberg asserts that wise people look out not just for themselves, but for all to whom they have a responsibility. He further asserts that teachers should actively teach their students ways of thinking that will lead them to become wise.
Some common challenges
Problems that students may have with understanding concepts include:
- A shaky grasp of the concept; understanding of a concept is shallow or narrow
- Relying on rote memory too much
- Poor concept comprehension monitoring
- Problems with verbal concepts
- Problems with nonverbal concepts
- Problems with process concepts
- Concept problems that are specific to a certain subject (math, science, literature, etc.)
- Poor abstract conceptualization
- Trouble making inferences
Problems that students may have with problem solving include:
- Problem identification — knowing a problem when you see one, and stating the whole problem
- Process selection — choosing the best process for solving the problem
- Representing the information clearly — stating the information in a clear way
- Strategy formation — forming a good strategy for solving the problem
- Allocation of resources — spending your resources of time and energy wisely
- Solution monitoring — checking to see if the solution is coming out right
- Evaluating solutions — evaluating which solution or solutions are best
Bell, N. (1991). Visualizing and verbalizing for language comprehension and thinking. Paso Robles, CA: Academy of Reading Publications.
Bennett, B. & Rolheiser, C. (2001). Beyond Monet: The artful science of instructional integration. Toronto: Bookation. This book is recommended for teachers. Chapters 8 and 9 focus on concept formation and concept attainment.
Berninger, V. W. & Richards, T. L. (2002). Brain literacy for educators and psychologists. San Diego, CA: Academic Press.
Csikszentmihalyi, M. (1990). Flow: The psychology of optimal experience. New York: HarperPerennial.
Glover, J. A., & Bruning, R. H. (1987). Educational psychology: Principles and applications. Boston and Toronto: Little, Brown and Company.
Inspiration® is a software learning tool that assists students of varying ages in developing ideas and organizing thinking. Inspiration's integrated diagramming and outlining environments work together to help students comprehend concepts and information. It is available at www.cdl.org
Hyerle, D. (1996). Visual tools for constructing knowledge. Alexandria, VA: Association for Supervision and Curriculum Development.
Levine, M. D. (2002). Educational care (Second Edition). Cambridge, MA: Educator's Publishing Service.
McKeown, M., Hamilton, M., Kucan, L. & Beck, I. (1997). Questioning the author: An Approach for enhancing student engagement with text. Newark, DE: International Reading Association.
Perkins, D. (1995). Outsmarting IQ: The emerging science of learnable intelligence. New York: The Free Press.
Sternberg, R. J. (2007). Wisdom, intelligence and creativity synthesized. New York, NY: Cambridge University Press.
Sternberg, R. J. & Spear-Swerling, L. (1996). Teaching for thinking. Washington, D.C.: American Psychological Association.
Sternberg, R. J. (1996). Successful intelligence. New York: Simon & Schuster.
Sternberg, R. J. & Grigorenko, E. L. (2007). Teaching for successful intelligence. Thousand Oaks, CA: Sage Publications.
Sternberg, R.J. and Lubart, T.I. (2000). Defying the crowd: Cultivating creativity in a culture of conformity. New York: The Free Press.
Thomas, A., Thorne, G., Small, R., DeSanti, P. & Lawson, C. (1998). MindWorks and how mine works. Covington, LA: Center for Development and Learning.
Thomas, A., ed. (1997). Plain Talk about k.i.d.s. Cambridge, MA: Educator's Publishing Service.
Thomas, A., ed. (2004). PlainTalk about kids. Covington, LA: Learning Success Press.
Thomas, A., and Thorne, G. (2009). How To Increase Higher Order Thinking. Metairie, LA: Center for Development and Learning. Retrieved Dec. 7, 2009, from http://www.cdl.org/resource-library/articles/HOT.php?type=subject&id=18
19 |
Adding + It Up: Helping Children Learn Mathematics 7 DEVELOPING PROFICIENCY WITH OTHER NUMBERS In this chapter, we look beyond the whole numbers at other numbers that are included in school mathematics in grades pre-K to 8, particularly in the upper grades. We first look at the rational numbers, which constitute what is undoubtedly the most challenging number system of elementary and middle school mathematics. Then we consider proportional reasoning, which builds on the ratio use of rational numbers. Finally, we examine the integers, a stepping stone to algebra. Rational Numbers Learning about rational numbers is more complicated and difficult than learning about whole numbers. Rational numbers are more complex than whole numbers, in part because they are represented in several ways (e.g., common fractions and decimal fractions) and used in many ways (e.g., as parts of regions and sets, as ratios, as quotients). There are numerous properties for students to learn, including the significant fact that the two numbers that compose a common fraction (numerator and denominator) are related through multiplication and division, not addition.1 This feature often causes misunderstanding when students first encounter rational numbers. Further, students are likely to have less out-of-school experience with rational numbers than with whole numbers. The result is a number system that presents great challenges to students and teachers. Moreover, how students become proficient with rational numbers is not as well understood as with whole numbers. Significant work has been done, however, on the teaching and learning of rational numbers, and several points
can be made about developing proficiency with them. First, students do have informal notions of sharing, partitioning sets, and measuring on which instruction can build. Second, in conventional instructional programs, the proficiency with rational numbers that many students develop is uneven across the five strands, and the strands are often disconnected from each other. Third, developing proficiency with rational numbers depends on well-designed classroom instruction that allows extended periods of time for students to construct and sustain close connections among the strands. We discuss each of these points below. Then we examine how students learn to represent and operate with rational numbers.

Using Informal Knowledge

Students’ informal notions of partitioning, sharing, and measuring provide a starting point for developing the concept of rational number.2 Young children appreciate the idea of “fair shares,” and they can use that understanding to partition quantities into equal parts. Their experience in sharing equal amounts can provide an entrance into the study of rational numbers. In some ways, sharing can play the role for rational numbers that counting does for whole numbers. In view of the preschooler’s attention to counting and number that we noted in chapter 5, it is not surprising that initially many children are concerned more that each person gets an equal number of things than with the size of each thing.3 As they move through the early grades of school, they become more sensitive to the size of the parts as well.4 Soon after entering school, many students can partition quantities into equal shares corresponding to halves, fourths, and eighths. These fractions can be generated by successively partitioning by half, which is an especially fruitful procedure since one half can play a useful role in learning about other fractions.5 Accompanying their actions of partitioning in half, many students develop the language of “one half” to describe the actions. Not long after, many can partition quantities into thirds or fifths in order to share quantities fairly among three or five people. An informal understanding of rational number, which is built mostly on the notion of sharing, is a good starting point for instruction. The notion of sharing quantities and comparing sizes of shares can provide an entry point that takes students into the world of rational numbers.6 Equal shares, for example, opens the concept of equivalent fractions (e.g., If there are 6
children sharing 4 pizzas, how many pizzas would be needed for 12 children to receive the same amount?). It is likely, however, that an informal understanding of rational numbers is less robust and widespread than the corresponding informal understanding of whole numbers. For whole numbers, many young children enter school with sufficient proficiency to invent their own procedures for adding, subtracting, multiplying, and dividing. For rational numbers, in contrast, teachers need to play a more active and direct role in providing relevant experiences to enhance students’ informal understanding and in helping them elaborate their informal understanding into a more formal network of concepts and procedures. The evidence suggests that carefully designed instructional programs can serve both of these functions quite well, laying the foundation for further progress.7

Discontinuities in Proficiency

Proficiency with rational numbers, as with all mathematical topics, is signaled most clearly by the close intertwining of the five strands. Large-scale surveys of U.S. students’ knowledge of rational number indicate that many students are developing some proficiency within individual strands.8 Often, however, these strands are not connected. Furthermore, the knowledge students acquire within strands is also disconnected. A considerable body of research describes this separation of knowledge.9 As we said at the beginning of the chapter, rational numbers can be expressed in various forms (e.g., common fractions, decimal fractions, percents), and each form has many common uses in daily life (e.g., a part of a region, a part of a set, a quotient, a rate, a ratio).10 One way of describing this complexity is to observe that, from the student’s point of view, a rational number is not a single entity but has multiple personalities. The scheme that has guided research on rational number over the past two decades11 identifies the following interpretations for any rational number, say 3/4: (a) a part-whole relation (3 out of 4 equal-sized shares); (b) a quotient (3 divided by 4); (c) a measure (3/4 of the way from the beginning of the unit to the end); (d) a ratio (3 red cars for every 4 green cars); and (e) an operation that enlarges or reduces the size of something (3/4 of 12). The task for students is to recognize these distinctions and, at the same time, to construct relations among them that generate a coherent concept of rational number.12 Clearly, this process is lengthy and multifaceted.
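A worked version of the pizza-sharing question above (added for illustration; the numbers come from the question itself): each of 6 children sharing 4 pizzas receives

    \frac{4}{6} = \frac{2}{3} \text{ of a pizza}, \qquad 12 \times \frac{2}{3} = 8, \qquad \frac{4}{6} = \frac{8}{12},

so 8 pizzas give 12 children the same-sized share; equal shares lead directly to equivalent fractions.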
Instructional practices that tend toward premature abstraction and extensive symbolic manipulation lead students to have severe difficulty in representing rational numbers with standard written symbols and using the symbols appropriately.13 This outcome is not surprising, because a single rational number can be represented with many different written symbols (e.g., 0.6, 0.60, 60%). Instructional programs have often treated this complexity as simply a “syntactic” translation problem: One written symbol had to be translated into another according to a sequence of rules. Different rules have often been taught for each translation situation. For example, “To change a common fraction to a decimal fraction, divide the numerator by the denominator.” But the symbolic representation of rational numbers poses a “semantic” problem—a problem of meaning—as well. Each symbol representation means something. Current instruction often gives insufficient attention to developing the meanings of different rational number representations and the connections among them. The evidence for this neglect is that a majority of U.S. students have learned rules for translating between forms but understand very little about what quantities the symbols represent and consequently make frequent and nonsensical errors.14 This is a clear example of the lack of proficiency that results from pushing ahead within one strand but failing to connect what is being learned with other strands. Rules for manipulating symbols are being memorized, but students are not connecting those rules to their conceptual understanding, nor are they reasoning about the rules. Another example of disconnection among the strands of proficiency is students’ tendency to compute with written symbols in a mechanical way without considering what the symbols mean. Two simple examples illustrate the point. First, recall (from chapter 4) the result from the National Assessment of Educational Progress (NAEP)15 showing that more than half of U.S. eighth graders chose 19 or 21 as the best estimate of 12/13 + 7/8. These choices do not make sense if students understand what the symbols mean and are reasoning about the quantities represented by the symbols. Another survey of students’ performance showed that the most common error for the addition problem 4+.3=? is .7, which is given by 68% of sixth graders and 51% of fifth and seventh graders.16 Again, the errors show that many students have learned rules for manipulating symbols without understanding what those symbols mean or why the rules work. Many students are unable to reason appropriately about symbols for rational numbers and do not have the strategic competence that would allow them to catch their mistakes.
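A brief worked restatement of the two errors just described (added for illustration): reasoning about the quantities gives

    \frac{12}{13} + \frac{7}{8} \approx 1 + 1 = 2,

since each fraction is slightly less than 1, whereas the popular answers come from manipulating symbols mechanically (12 + 7 = 19, 13 + 8 = 21). Likewise 4 + .3 = 4.3; the common error .7 comes from adding the digits 4 and 3 as if they named like-sized parts and then keeping the decimal point.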
Supporting Connections

Of all the ways in which rational numbers can be interpreted and used, the most basic is the simplest—rational numbers are numbers. That fact is so fundamental that it is easily overlooked. A rational number like 3/4 is a single entity just as the number 5 is a single entity. Each rational number holds a unique place (or is a unique length) on the number line (see chapter 3). As a result, the entire set of rational numbers can be ordered by size, just as the whole numbers can. This ordering is possible even though between any two rational numbers there are infinitely many rational numbers, in drastic contrast to the whole numbers.

It may be surprising that, for most students, to think of a rational number as a number—as an individual entity or a single point on a number line—is a novel idea.17 Students are more familiar with rational numbers in contexts like parts of a pizza or ratios of hits to at-bats in baseball. These everyday interpretations, although helpful for building knowledge of some aspects of rational number, are an inadequate foundation for building proficiency. The difficulty is not just due to children's limited experience. Even the interpretations ordinarily given by adults to various forms of rational numbers, such as percent, do not lead easily to the conclusion that rational numbers are numbers.18 Further, the way common fractions are written (e.g., 3/4) does not help students see a rational number as a distinct number. After all, 3/4 looks just like one whole number over another, and many students initially think of it as two different numbers, a 3 and a 4.

Research has verified what many teachers have observed: students continue to use properties they learned from operating with whole numbers even though many whole-number properties do not apply to rational numbers. With common fractions,19 for example, students may reason that one fraction is larger than another because 8 is larger than 7. Or they may believe that two fractions are equal because in both the difference between numerator and denominator is 1. With decimal fractions,20 students may say .25 is larger than .7 because 25 is larger than 7. Such inappropriate extensions of whole-number relationships, many based on addition, can be a continuing source of trouble when students are learning to work with fractions and their multiplicative relationships.21 The task for instruction is to use, rather than to ignore, the informal knowledge of rational numbers that students bring with them and to provide them with appropriate experiences and sufficient time to develop meaning for these new numbers and meaningful ways of operating with them.
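The specific fractions and decimals below are hypothetical instances chosen only to illustrate the whole-number bias just described; a few one-line checks show why that reasoning fails.

```python
from fractions import Fraction
from decimal import Decimal

# Whole-number reasoning says "8 > 7, so eighths beat sevenths" -- but
# cutting a whole into more pieces makes each piece smaller.
print(Fraction(1, 8) < Fraction(1, 7))       # True

# "Numerator and denominator differ by 1 in both, so they are equal" -- false.
print(Fraction(2, 3) == Fraction(3, 4))      # False

# "25 is larger than 7, so .25 > .7" -- false once place value is considered.
print(Decimal("0.25") < Decimal("0.7"))      # True
```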
Systematic errors can best be regarded as useful diagnostic tools for instruction since they more often represent incomplete rather than incorrect knowledge.22 From the current research base, we can make several observations about the kinds of learning opportunities that instruction must provide students if they are to develop proficiency with rational numbers. These observations address both representing rational numbers and computing with them.

Representing Rational Numbers

As with whole numbers, the written notations and spoken words used for decimal and common fractions contribute to—or at least do not help correct—the many kinds of errors students make with them. Both decimals and common fractions use whole numbers in their notations. Nothing in the notation or the words used conveys their meaning as fractured parts. The English words used for fractions are the same words used to tell order in a line: fifth in line and three fifths (for 3/5). In contrast, in Chinese, 3/5 is read "out of 5 parts (take) 3." Providing students with many experiences in partitioning quantities into equal parts using concrete models, pictures, and meaningful contexts can help them create meaning for fraction notations. Introducing the standard notation for common fractions and decimals must be done with care, ensuring that students are able to connect the meanings already developed for the numbers with the symbols that represent them.

Research does not prescribe a one best set of learning activities or one best instructional method for rational numbers. But some sequences of activities do seem to be more effective than others for helping students develop a conceptual understanding of symbolic representations and connect it with the other strands of proficiency.23 The sequences that have been shown to promote mathematical proficiency differ from each other in a number of ways, but they share some similarities. All of them spend time at the outset helping students develop meaning for the different forms of representation. Typically, students work with multiple physical models for rational numbers as well as with other supports such as pictures, realistic contexts, and verbal descriptions. Time is spent helping students connect these supports with the written symbols for rational numbers.

In one such instructional sequence, fourth graders received 20 lessons introducing them to rational numbers.24 Almost all the lessons focused on helping the students connect the various representations of rational number with concepts of rational number that they were developing. Unique to this program was the sequence in which the forms were introduced: percents, then decimal fractions, and then common fractions. Because many children
in the fourth grade have considerable informal knowledge of percents, percents were used as the starting point. Students were asked to judge, for example, the relative fullness of a beaker (e.g., 75%) and the relative height of a tube of liquid (e.g., 30%). After a variety of similar activities, the percent representations were used to introduce the decimal fractions and, later, the common fractions. Compared with students in a conventional program, who spent less time developing meaning for the representations and more time practicing computation, students in the experimental program demonstrated higher levels of adaptive reasoning, conceptual understanding, and strategic competence, with no loss of computational skill. This finding illustrates one of our major themes: Progress can be made along all strands if they remain connected.

Another common feature of learning activities that help students understand and use the standard written symbols is the careful attention the activities devote to the concept of unit.25 Many conventional curricula introduce rational numbers as common fractions that stand for part of a whole, but little attention is given to the whole from which the rational number extracts its meaning. For example, many students first see a fraction as, say, 3/4 of a pizza. In this interpretation the amount of pizza is determined by the fractional part and by the size of the pizza. Hence, three fourths of a medium pizza is not the same amount of pizza as three fourths of a large pizza, although it may be the same number of pieces. Lack of attention to the nature of the unit or whole may explain many of the misconceptions that students exhibit.

A sequence of learning activities that focuses directly on the whole unit in representing rational numbers comes from an experimental curriculum in Russia.26 In this sequence, rational numbers are introduced in the early grades as ratios of quantities to the unit of measure. For example, a piece of string is measured by a small piece of tape and found to be equivalent to five copies of the tape. Children express the result as "string/tape = 5." Rational numbers appear quite naturally when the quantity is not measured by the unit an exact number of times. The leftover part is then represented, first informally and then as a fraction of the unit. With this approach, the size of the unit always is in the foreground. The evidence suggests that students who engage in these experiences develop coherent meanings for common fractions, meanings that allow them to reason sensibly about fractions.27
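A rough sketch of the measurement idea follows, under the assumption that the unit may or may not fit a whole number of times. The helper function measure is invented here for illustration and is not part of the Russian curriculum materials.

```python
from fractions import Fraction

def measure(quantity, unit):
    """Express quantity as (whole copies of unit, leftover as a fraction of the unit)."""
    copies = quantity // unit                       # how many whole units fit
    leftover = (quantity - copies * unit) / unit    # remaining part, measured by the unit
    return copies, leftover

# A string exactly 5 tapes long: string/tape = 5, with nothing left over.
print(measure(Fraction(15), Fraction(3)))   # (5, Fraction(0, 1))

# A string 17 units long measured by a tape of length 3: five tapes and
# 2/3 of a tape, so string/tape = 5 + 2/3.
print(measure(Fraction(17), Fraction(3)))   # (5, Fraction(2, 3))
```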
Computing with Rational Numbers

As with representing rational numbers, many students need instructional support to operate appropriately with rational numbers. Adding, subtracting, multiplying, and dividing rational numbers require that they be seen as numbers because in elementary school these operations are defined only for numbers. That is, the principles on which computation is based make sense only if common fractions and decimal fractions are understood as representing numbers. Students may think of a fraction as part of a pizza or as a batting average, but such interpretations are not enough for them to understand what is happening when computations are carried out. The trouble is that many students have not developed a meaning for the symbols before they are asked to compute with rational numbers.

Proficiency in computing with rational numbers requires operating with at least two different representations: common fractions and finite decimal fractions. There are important conceptual similarities between the rules for computing with both of these forms (e.g., combine those terms measured with the same unit when adding and subtracting). However, students must learn how those conceptual similarities play out in each of the written symbol systems. Procedural fluency for arithmetic with rational numbers thus requires that students understand the meaning of the written symbols for both common fractions and finite decimal fractions.

What can be learned from students' errors?

Research reveals the kinds of errors that students are likely to make as they begin computing with common fractions and finite decimals. Whether the errors are the consequence of impoverished learning of whole numbers or insufficiently developed meaning for rational numbers, effective instruction with rational numbers needs to take these common errors into account. Some of the errors occur when students apply to fractions poorly understood rules for calculating with whole numbers. For example, they learn to "line up the numbers on the right" when they are adding and subtracting whole numbers. Later, they may try to apply this rule to decimal fractions, probably because they did not understand why the rule worked in the first place and because decimal fractions look a lot like whole numbers. This confusion leads many students to get .61 when adding 1.5 and .46, for example.28

It is worth pursuing the above example a bit further. Notice that the rule "line up the numbers on the right" and the new rule for decimal fractions "line up the decimal points" are, on the surface, very different rules.
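A small sketch, using the 1.5 + .46 example above, contrasts the place-value rule with the misapplied right-alignment rule; it is an illustration added here, not part of the report.

```python
from decimal import Decimal

# Correct: align digits by place value (the "line up the decimal points" rule).
print(Decimal("1.5") + Decimal("0.46"))   # 1.96

# The error pattern: right-aligning the digits as if both were whole numbers
# treats 1.5 as "15" and .46 as "46", i.e., 15 + 46 = 61, reported as .61.
print(15 + 46)                            # 61
```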
They prescribe movements of digits in different-sounding ways. At a deeper level, however, they are exactly the same. Both versions of the rule result in aligning digits measured with the same unit—digits with the same place value (tens, ones, tenths, etc.). This deeper level of interpretation is, of course, the one that is more useful. When students know a rule only at a superficial level, they are working with symbols, rules, and procedures in a routine way, disconnected from strands such as adaptive reasoning and conceptual understanding. But when students see the deeper level of meaning for a procedure, they have connected the strands together. In fact, seeing the second level is a consequence of connecting the strands. This example illustrates once more why connecting the strands is the key to developing proficiency.

A second example of a common error, and one that also can be traced to previous experience with whole numbers, is that "multiplying makes larger" and "dividing makes smaller."29 These generalizations are not true for the full set of rational numbers. Multiplying by a rational number less than 1 means taking only a part of the quantity being multiplied, so the result is less than the original quantity (e.g., 3/4 × 12 = 9, which is less than 12). Likewise, dividing by a rational number less than 1 produces a quantity larger than either quantity in the original problem (e.g., 6 ÷ 2/3 = 9).

As with the addition and subtraction of rational numbers, there are important conceptual similarities between whole numbers and rational numbers when students learn to multiply and divide. These similarities are often revealed by probing the deeper meaning of the operations. In the division example above, notice that to find the answer to 6 ÷ 2 = ? and to 6 ÷ 2/3 = ?, the same question can be asked: How many [2s or 2/3s] are in 6? The similarities are not apparent in the algorithms for manipulating the symbols. Therefore, if students are to connect what they are learning about rational numbers with what they already understand about whole numbers, they will need to do so through other kinds of activities. One helpful approach is to embed the calculation in a realistic problem. Students can then use the context to connect their previous work with whole numbers to the new situations with rational numbers. An example is the following problem: I have six cups of sugar. A recipe calls for 2/3 of a cup of sugar. How many batches of the recipe can I make? Since the size of the parts is less than one whole, the number of batches will necessarily be larger than six (there are nine 2/3s in 6).
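The multiplication and division claims can be checked directly. The sketch below uses the 3/4-of-12 and 2/3-cup examples discussed above; it is only an illustration.

```python
from fractions import Fraction

# Multiplying by a number less than 1 takes only part of the quantity.
print(Fraction(3, 4) * 12)     # 9, which is less than 12

# Dividing by a number less than 1 asks "how many of these parts fit?",
# so the quotient is larger than the dividend.
print(6 / Fraction(2, 3))      # 9: there are nine 2/3-cup portions in 6 cups

# The same question for whole numbers: how many 2s are in 6?
print(6 / Fraction(2))         # 3
```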
Useful activities might include drawing pictures of the division calculation, describing solution methods, and explaining why the answer makes sense. Simply teaching the rule "invert and multiply" leads to the same sort of mechanical manipulation of symbols that results from just telling students to "line up the decimal points."

What can be learned from conventional and experimental instruction?

Conventional instruction on rational number computation tends to be rule based.30 Classroom activities emphasize helping students become quick and accurate in executing written procedures by following rules. The activities often begin by stating a rule or algorithm (e.g., "to multiply two fractions, multiply the numerators and multiply the denominators"), showing how it works on several examples (sometimes just one), and asking students to practice it on many similar problems. Researchers express concern that this kind of learning can be "highly dependent on memory and subject to deterioration."31 This "deterioration" results when symbol manipulation is emphasized to the relative exclusion of conceptual understanding and adaptive reasoning. Students learn that it is not important to understand why the procedure works but only to follow the prescribed steps to reach the correct answer. This approach breaks the incipient connections between the strands of proficiency, and, as the breaks increase, proficiency is thwarted.

A number of studies have documented the results of conventional instruction.32 One study, for example, found that only 45% of a random sample of 20 sixth graders interviewed could add fractions correctly.33 Equally disturbing was that fewer than 10% of them could explain how one adds fractions even though all had heard the rules for addition, had practiced the rules on many problems, and sometimes could execute the rules correctly. These results, according to the researchers, were representative of hundreds of interviews conducted with sixth, seventh, and ninth graders. The results point to the need for instructional materials that support teachers and students so that they can explain why a procedure works rather than treating it as a sequence of steps to be memorized.

Many researchers who have studied what students know about operations with fractions or decimals recommend that instruction emphasize conceptual understanding from the beginning.34 More specifically, say these researchers, instruction should build on students' intuitive understanding of fractions and use objects or contexts that help students make sense of the operations. The rationale for that approach is that students need to
understand the key ideas in order to have something to connect with procedural rules. For example, students need to understand why the sum of two fractions can be expressed as a single number only when the parts are of the same size. That understanding can lead them to see the need for constructing common denominators.

One of the most challenging tasks confronting those who design learning environments for students (e.g., curriculum developers, teachers) is to help students learn efficient written algorithms for computing with fractions and decimals. The most efficient algorithms often do not parallel students' informal knowledge or the meaning they create by drawing diagrams, manipulating objects, and so on. Several instructional programs have been devised that use problem situations and build on algorithms invented by students.35 Students in these programs were able to develop meaningful and reasonably efficient algorithms for operating with fractions, even when the formal algorithms were not presented.36 It is not yet clear, however, what sequence of activities can support students' meaningful learning of the less transparent but more efficient formal algorithms, such as "invert and multiply" for dividing fractions.

Although there is only limited research on instructional programs for developing proficiency with computations involving rational numbers, it seems clear that instruction focused solely on symbolic manipulation without understanding is ineffective for most students. It is necessary to correct that imbalance by paying more attention to conceptual understanding as well as the other strands of proficiency and by helping students connect them.

Proportional Reasoning

Proportions are statements that two ratios are equal. These statements play an important role in mathematics and are formally introduced in middle school. Understanding the underlying relationships in a proportional situation and working with these relationships has come to be called proportional reasoning.37 Considerable research has been conducted on the features of proportional reasoning and how students develop it.38 Proportional reasoning is based, first, on an understanding of ratio. A ratio expresses a mathematical relationship that involves multiplication, as in $2 for 3 balloons or 2/3 of a dollar for one balloon. A proportion, then, is a relationship between relationships. For example, a proportion expresses the fact that $2 for 3 balloons is in the same relationship as $6 for 9 balloons. Ratios are often changed to unit ratios by dividing. For example, the unit ratio 2/3 of a dollar per balloon is obtained by "dividing" $2 by 3 balloons.
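A brief sketch of the balloon example: computing the unit rate, checking the proportion, and using the rate on a missing-value question. The 12-balloon question is an invented illustration, not from the report.

```python
from fractions import Fraction

# A ratio that involves multiplication: $2 for 3 balloons.
dollars, balloons = 2, 3
unit_rate = Fraction(dollars, balloons)      # 2/3 of a dollar per balloon

# A proportion says two ratios are equal: $2 for 3 balloons ~ $6 for 9 balloons.
print(Fraction(2, 3) == Fraction(6, 9))      # True

# Missing-value problem solved with the unit rate: what do 12 balloons cost?
print(unit_rate * 12)                        # 8 (dollars)
```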
of proportional reasoning, can create difficulties for students. The aspects of proportional reasoning that must be developed can be supported through exploring proportional (and nonproportional) situations in a variety of problem contexts using concrete materials or situations in which students collect data, build tables, and determine the relationships between the number pairs (ratios) in the tables.50 When 187 seventh-grade students with different curricular experiences were presented with a sequence of realistic rate problems, the students in the reform curricula considerably outperformed a comparison group of students (53% versus 28%) in providing correct answers with correct support work.51 These students were part of the field trials for a new middle school curriculum in which they were encouraged to develop their own procedures through collaborative problem-solving activities. The comparison students had more traditional, teacher-directed instructional experiences.

Proportional reasoning is complex and clearly needs to be developed over several years.52 One simple implication from the research suggests that presenting the cross-multiplication algorithm before students understand proportions and can reason about them leads to the same kind of separation between the strands of proficiency that we described earlier for other topics. But more research is needed to identify the sequences of activities that are most helpful for moving from well-understood but less efficient procedures to those that are more efficient.

Ratios and proportions, like fractions, decimals, and percents, are aspects of what have been called multiplicative structures.53 These are closely related ideas that no doubt develop together, although they are often treated as separate topics in the typical school curriculum. Reasoning about these ideas likely interacts, but it is not well understood how this interaction develops. Much more work needs to be done on helping students integrate their knowledge of these components of multiplicative structures.

Integers

The set of integers comprises the positive and negative whole numbers and zero or, expressed another way, the whole numbers and their inverses, often called their opposites (see Chapter 3). The set of integers, like the set of whole numbers, is a subset of the rational numbers. Compared with the research on whole numbers and even on noninteger rational numbers, there has been relatively little research on how students acquire an understanding of negative numbers and develop proficiency in operating with them.
A half-century ago students did not encounter negative numbers until they took high school algebra. Since then, integers have been introduced in the middle grades and even in the elementary grades. Some educators have argued that integers are easier for students than fractions and decimals and therefore should be introduced first. This approach has been tried, but there is very little research on the long-term effects of this alternative sequencing of topics.

Concept of Negative Numbers

Even young children have intuitive or informal knowledge of nonpositive quantities prior to formal instruction.54 These notions often involve action-based concepts like those associated with temperature, game moves, or other spatial and quantitative situations. For example, in some games there are moves that result in points being lost, which can lead to scores below zero or "in the hole." Various metaphors have been suggested as approaches for introducing negative numbers, including elevators, thermometers, debts and assets, losses and gains, hot air balloons, postman stories, pebbles in a bag, and directed arrows on a number line.55 Many of the physical metaphors for introducing integers have been criticized because they do not easily support students' understanding of the operations on integers (other than addition).56 But some studies have demonstrated the value of using these metaphors, especially for introducing negative numbers.57 Students do appear to be capable of understanding negative numbers far earlier than was once thought. Although more research is needed on the metaphors and models that best support students' conceptual understanding of negative numbers, there already is enough information to suggest that a variety of metaphors and models can be used effectively.

Operations with Integers

Research on learning to add, subtract, multiply, and divide integers is limited. In the past, students often learned the "rules of signs" (e.g., the product of a positive and negative number is negative) without much understanding. In part, perhaps, because instruction has not found ways to make the learning meaningful, some secondary and college students still have difficulty working with negative numbers.58 Alternative approaches, using the models mentioned earlier, have been tried with various degrees of success.59
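A small illustrative sketch (not from the report) of the story contexts discussed below: integer addition read as gains and losses, and the "rules of signs" treated as statements that can be checked rather than simply memorized.

```python
# A debts-and-assets reading of integer arithmetic: positive numbers are
# gains (assets), negative numbers are losses (debts).
score = 0
moves = [+5, -7, +3]        # gain 5, lose 7, gain 3
for m in moves:
    score += m
print(score)                 # 1

# "Rules of signs" as checkable statements:
print((-3) * (-4) == 12)     # True: the product of two negatives is positive
print((-3) * 4 == -12)       # True: a negative times a positive is negative
```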
A complete set of appropriate learning activities with integers has not been identified, but there are some promising elements that should be explored further. Students generally perform better on problems posed in the context of a story (debts and assets, scores and forfeits) or through movements on a number line than on the same problems presented solely as formal equations.60 This result suggests, as for other number domains, that stories and other conceptual structures such as a number line can be used effectively as the context in which students begin their work and develop meaning for the operations. Furthermore, there are some approaches that seem to minimize commonly reported errors.61 In general, approaches that use an appropriate model of integers and operations on integers, and that spend time developing these and linking them to the symbols, offer the most promise.

Beyond Whole Numbers

Although the research provides a less complete picture of students' developing proficiency with rational numbers and integers than with whole numbers, several important points can be made. First, developing proficiency is a gradual and prolonged process. Many students acquire useful informal knowledge of fractions, decimals, ratios, percents, and integers through activities and experiences outside of school, but that knowledge needs to be made more explicit and extended through carefully designed instruction. Given current learning patterns, effective instruction must prepare for interferences arising from students' superficial knowledge of whole numbers. The unevenness many students show in developing proficiency, which we noted with whole numbers, seems especially pronounced with rational numbers, where progress is made on different fronts at different rates. The challenge is to engage students throughout the middle grades in learning activities that support the integration of the strands of proficiency.

A second observation is that doing just that—integrating the strands of proficiency—is an even greater challenge for rational numbers than for whole numbers. Currently, many students learn different aspects of rational numbers as separate and isolated pieces of knowledge. For example, they fail to see the relationships between decimals, fractions, and percents, on the one hand, and whole numbers, on the other, or between integers and whole numbers. Also, connections among the strands of proficiency are often not made. Numerous studies show that with common fractions and decimals, especially, conceptual understanding and computational procedures are not appropriately linked. Further, students can use their informal knowledge of
proportionality or rational numbers strategically to solve problems but are unable to represent and solve the same problem formally. These discontinuities are of great concern because the research we have reviewed indicates that real progress along each strand and within any single topic is exceedingly difficult without building connections between them.

A third issue concerns the level of procedural fluency that should be required for arithmetic with decimals and common fractions. Decimal fractions are crucial in science, in metric measurement, and in more advanced mathematics, so it is important for students to be computationally fluent—to understand how and why computational procedures work, including being able to judge the order-of-magnitude accuracy of calculator-produced answers. Some educators have argued that common fractions are no longer essential in school mathematics because digital electronics have transformed almost all numerical transactions into decimal fractions. Technological developments certainly have increased the importance of decimals, but common fractions are still important in daily life and in their own right as mathematical objects, and they play a central role in the development of more advanced mathematical ideas. For example, computing with common fractions sets the stage for computing with rational expressions in algebra. It is important, therefore, for students to develop sound meanings for common fractions and to be fluent with ordering fractions, finding equivalent fractions, and using unit rates. Students should also develop procedural fluency for computations with "manageable" fractions. However, the rapid execution of paper-and-pencil computation algorithms for less frequently used fractions is unnecessary today.

Finally, we cannot emphasize too strongly the simple fact that students need to be fully proficient with rational numbers and integers. This proficiency forms the basis for much of advanced mathematical thinking, as well as the understanding and interpretation of daily events. The level at which many U.S. students function with rational numbers and integers is unacceptable. The disconnections that many students exhibit among their conceptual understanding, procedural fluency, strategic competence, and adaptive reasoning pose serious barriers to their progress in learning and using mathematics. Evidence from experimental programs in the United States and from the performance of students in other countries suggests that U.S. middle school students are capable of learning more about rational numbers and integers, with deeper levels of understanding.
Notes

1. See Harel and Confrey, 1994. Rational numbers, ratios, and proportions, which on the surface are about division, are called multiplicative concepts because any division problem can be rephrased as multiplication. See Chapter 3.
2. Behr, Lesh, Post, and Silver, 1983; Confrey, 1994, 1995; Empson, 1999; Kieren, 1992; Mack, 1990, 1995; Pothier and Sawada, 1983; Streefland, 1991, 1993.
3. Hiebert and Tonnessen, 1978; Pothier and Sawada, 1983.
4. Empson, 1999; Pothier and Sawada, 1983.
5. Confrey, 1994; Pothier and Sawada, 1989.
6. Confrey, 1994; Streefland, 1991, 1993.
7. Cramer, Behr, Post, and Lesh, 1997; Empson, 1999; Mack, 1995; Morris, in press; Moss and Case, 1999; Streefland, 1991, 1993.
8. Kouba, Zawojewski, and Strutchens, 1997; Wearne and Kouba, 2000.
9. Behr, Lesh, Post, and Silver, 1983; Behr, Wachsmuth, Post, and Lesh, 1984; Bezuk and Bieck, 1993; Hiebert and Wearne, 1985; Mack, 1990, 1995; Post, Wachsmuth, Lesh, and Behr, 1985; Streefland, 1991, 1993.
10. Kieren, 1976.
11. Kieren, 1976, 1980, 1988.
12. Students not only should "construct relations among them" but should also eventually have some grasp of what is entailed in these relations—for example, that Interpretation D is a contextual instance of E—namely, you multiply the number of green cars by 3/4 to get the number of red cars, while thinking of 3/4 as three times 1/4 (Interpretation A), and thinking of it as 3 divided by 4, is the equation 3 ÷ 4 = 3 × (1 ÷ 4), which is basically the associative law for multiplication.
13. Behr, Wachsmuth, Post, and Lesh, 1984; Hiebert and Wearne, 1986.
14. Hiebert and Wearne, 1986; Resnick, Nesher, Leonard, Magone, Omanson, and Peled, 1989.
15. Carpenter, Corbitt, Kepner, Lindquist, and Reys, 1981.
16. Hiebert and Wearne, 1986.
17. Behr, Lesh, Post, and Silver, 1983.
18. Davis, 1988.
19. Behr, Wachsmuth, Post, and Lesh, 1984.
20. Resnick, Nesher, Leonard, Magone, Omanson, and Peled, 1989.
21. Behr, Wachsmuth, Post, and Lesh, 1984.
22. Resnick, Nesher, Leonard, Magone, Omanson, and Peled, 1989.
23. Cramer, Post, Henry, and Jeffers-Ruff, in press; Hiebert and Wearne, 1988; Hunting, 1983; Mack, 1990, 1995; Morris, in press; Moss and Case, 1999; Hiebert, Wearne, and Taber, 1991.
24. Moss and Case, 1999.
25. Behr, Harel, Post, and Lesh, 1992.
26. Davydov and Tsvetkovich, 1991; Morris, in press; Schmittau, 1993.
27. Morris, in press.
28. Hiebert and Wearne, 1986.
29. Bell, Fischbein, and Greer, 1984; Fischbein, Deri, Nello, and Marino, 1985.
30. Hiebert and Wearne, 1985.
31. Kieren, 1988, p. 178.
32. Mack, 1990; Peck and Jencks, 1981; Wearne and Kouba, 2000.
33. Peck and Jencks, 1981.
34. Behr, Lesh, Post, and Silver, 1983; Bezuk and Bieck, 1993; Bezuk and Cramer, 1989; Hiebert and Wearne, 1986; Kieren, 1988; Mack, 1990; Peck and Jencks, 1981; Streefland, 1991, 1993.
35. Cramer, Behr, Post, and Lesh, 1997; Huinker, 1998; Lappan, Fey, Fitzgerald, Friel, and Phillips, 1996; Streefland, 1991.
36. Huinker, 1998; Lappan and Bouck, 1998.
37. Lesh, Post, and Behr, 1988.
38. Tourniaire and Pulos, 1985.
39. Behr, Harel, Post, and Lesh, 1992; Cramer, Behr, and Bezuk, 1989.
40. Post, Behr, and Lesh, 1988.
41. Lesh, Post, and Behr, 1988.
42. Wearne and Kouba, 2000.
43. Ahl, Moore, and Dixon, 1992; Dixon and Moore, 1996.
44. Lamon, 1993, 1995.
45. Lamon, 1993.
46. Lamon, 1995.
47. The term composite unit refers to thinking of 3 balloons (and hence $2) as a single entity. The related term compound unit is used in science to refer to units such as "miles/hour," or in this case "dollars per balloon."
48. Lamon, 1993, 1994.
49. Heller, Ahlgren, Post, Behr, and Lesh, 1989; Langrall and Swafford, 2000.
50. Cramer, Post, and Currier, 1993; Kaput and West, 1994.
51. Ben-Chaim, Fey, Fitzgerald, Benedetto, and Miller, 1998; Heller, Ahlgren, Post, Behr, and Lesh, 1989.
52. Behr, Harel, Post, and Lesh, 1992; Karplus, Pulas, and Stage, 1983.
53. Vergnaud, 1983.
54. Hativa and Cohen, 1995.
55. English, 1997. See also Crowley and Dunn, 1985.
56. Fischbein, 1987, ch. 8.
57. Duncan and Saunders, 1980; Moreno and Mayer, 1999; Thompson, 1988.
58. Bruno, Espinel, and Martinon, 1997; Kuchemann, 1980.
59. Arcavi and Bruckheimer, 1981; Carson and Day, 1995; Davis, 1990; Liebeck, 1990; Human and Murray, 1987.
60. Moreno and Mayer, 1999; Mukhopadhyay, Resnick, and Schauble, 1990.
61. Duncan and Saunders, 1980; Thompson, 1988; Thompson and Dreyfus, 1988.
Adding + It Up: Helping Children Learn Mathematics References Ahl, V.A., Moore, C.F., & Dixon, J.A. (1992). Development of intuitive and numerical proportional reasoning. Cognitive Development, 7, 81–108. Arcavi, A., & Bruckheimer, M. (1981). How shall we teach the multiplication of negative numbers? Mathematics in School, 10, 31–33. Behr, M., Harel, G., Post, T., & Lesh, R. (1992). Rational number, ratio, and proportion. In D.Grouws (Ed.), Handbook of research on mathematics teaching and learning (pp. 296– 333). New York: Macmillan. Behr, M.J., Lesh, R., Post, T.R., & Silver, E.A. (1983). Rational number concepts. In R. Lesh & M.Landau (Eds.), Acquisition of mathematics concepts and processes (pp. 91– 126). New York: Academic Press. Behr, M.J., Wachsmuth, I., Post, T.R., & Lesh, R. (1984). Order and equivalence of rational numbers: A clinical teaching experiment. Journal for Research in Mathematics Education, 15, 323–341. Bell, A.W., Fischbein, E., & Greer, B. (1984). Choice of operation in verbal arithmetic problems: The effects of number size, problem structure and content. Educational Studies in Mathematics, 15, 129–147. Ben-Chaim, D., Fey, J.T., Fitzgerald, W.M., Benedetto, C., & Miller, J. (1998). Proportional reasoning among 7th grade students with different curricular experiences. Educational Studies in Mathematics, 36, 247–273. Bezuk, N.D., & Bieck, M. (1993). Current research on rational numbers and common fractions: Summary and implications for teachers. In D.T.Owens (Ed.), Research ideas for the classroom: Middle grades mathematics (pp. 118–136). New York: Macmillan. Bezuk, N., & Cramer, K. (1989). Teaching about fractions: What, when, and how? In P. Trafton (Ed.), New directions for elementary school mathematics (1989 Yearbook of the National Council of Teachers of Mathematics, pp. 156–167). Reston VA: NCTM. Bruno, A., Espinel, M.C., Martinon, A. (1997). Prospective teachers solve additive problems with negative numbers. Focus on Learning Problems in Mathematics, 19, 36–55. Carpenter, T.P., Corbitt, M.K., Kepner, H.S., Jr., Lindquist, M.M., & Reys, R.E. (1981). Results from the second mathematics assessment of the National Assessment of Educational Progress. Reston, VA: National Council of Teachers of Mathematics. Carson, C.L., & Day, J. (1995). Annual report on promising practices: How the algebra project eliminates the “game of signs” with negative numbers. San Francisco: Far West Lab for Educational Research and Development. (ERIC Document Reproduction Service No. ED 394 828). Confrey, J. (1994). Splitting, similarity, and the rate of change: New approaches to multiplication and exponential functions. In G.Harel & J.Confrey (Eds.), The development of multiplicative reasoning in the learning of mathematics (pp. 293–332). Albany: State University of New York Press. Confrey, J. (1995). Student voice in examining “splitting” as an approach to ratio, proportion, and fractions. In L.Meira & D.Carraher (Eds.), Proceedings of the nineteenth international conference for the Psychology of Mathematics Education (Vol. 1, pp. 3–29). Recife, Brazil: Federal University of Pernambuco. (ERIC Document Reproduction Service No. ED 411 134). Cramer, K., Behr, M., & Bezuk, N. (1989). Proportional relationships and unit rates. Mathematics Teacher, 82, 537–544.
Adding + It Up: Helping Children Learn Mathematics Cramer, K., Behr, M., Post, T., & Lesh, R. (1997). Rational Numbers Project: Fraction lessons for the middle grades, level 1 and level 2. Dubuque, IA: Kendall Hunt. Cramer, K., Post, T., & Currier, S. (1993). Learning and teaching ratio and proportion: Research implications. In D.T.Owens (Ed.), Research ideas for the classroom: Middle grades mathematics (pp. 159–178). New York: Macmillan. Cramer, K., Post, T., Henry, A., & Jeffers-Ruff, L. (in press). Initial fraction learning of fourth and fifth graders using a commercial textbook or the Rational Number Project Curriculum. Journal for Research in Mathematics Education. Crowley, M.L., & Dunn, K.A. (1985). On multiplying negative numbers. Mathematics Teacher, 78, 252–256. Davydov, V.V., & Tsvetkovich, A.H. (1991). On the objective origin of the concept of fractions. Focus on Learning Problems in Mathematics, 13, 13–64. Davis, R.B. (1988). Is a “percent” a number?” Journal of Mathematical Behavior, 7(1), 299–302. Davis, R.B. (1990). Discovery learning and constructivism. In R.B.Davis, C.A.Maher, & N.Noddings, (Eds.), Constructivist views on the teaching and learning of mathematics (Journal for Research in Mathematics Education Monograph No. 4, pp. 93–106). Reston, VA: National Council of Teachers of Mathematics. Dixon, J.A., & Moore, C.F. (1996). The developmental role of intuitive principles in choosing mathematical strategies. Developmental Psychology, 32, 241–253. Duncan, R.K., & Saunders, W.J. (1980). Introduction to integers. Instructor, 90(3), 152– 154. Empson, S.B. (1999). Equal sharing and shared meaning: The development of fraction concepts in a first-grade classroom. Cognition and Instruction, 17, 283–342. English, L.D. (Ed.). (1997). Mathematical reasoning: Analogies, metaphors, and images. Mahwah, NJ: Erlbaum. Fischbein, E. (1987). Intuition in science and mathematics. Dordrecht, The Netherlands: Reidel. Fischbein, E., Deri, M., Nello, M.S., & Marino, M.S. (1985). The role of implicit models in solving problems in multiplication and division . Journal for Research in Mathematics Education, 16, 3–17. Harel, G., & Confrey, J. (1994). The development of multiplicative reasoning in the learning of mathematics. Albany: State University of New York Press. Hativa, N., & Cohen, D. (1995). Self learning of negative number concepts by lower division elementary students through solving computer-provided numerical problems. Educational Studies in Mathematics, 28, 401–431. Heller, P., Ahlgren, A., Post, T., Behr, M., & Lesh, R. (1989). Proportional reasoning: The effect of two concept variables, rate type and problem setting. Journal for Research in Science Teaching, 26, 205–220. Hiebert, J., & Tonnessen, L.H. (1978). Development of the fraction concept in two physical contexts: An exploratory investigation. Journal for Research in Mathematics Education, 9, 374–378. Hiebert, J., & Wearne, D. (1985). A model of students’ decimal computation procedures. Cognition and Instruction, 2, 175–205.
Adding + It Up: Helping Children Learn Mathematics Hiebert, J., & Wearne, D. (1986). Procedures over concepts: The acquisition of decimal number knowledge. In J.Hiebert (Ed.), Conceptual and procedural knowledge: The case of mathematics (pp. 199–223). Hillsdale, NJ: Erlbaum. Hiebert, J., & Wearne, D. (1988). Instruction and cognitive change in mathematics. Educational Psychologist, 23, 105–117. Hiebert, J., Wearne, D., & Taber, S. (1991). Fourth graders’ gradual construction of decimal fractions during instruction using different physical representations. Elementary School Journal, 91, 321–341. Huinker, D. (1998). Letting fraction algorithms emerge through problem solving. In L. J.Morrow & M.J.Kenney (Eds.), The teaching and learning of algorithms in school mathematics (1998 Yearbook of the National Council of Teachers of Mathematics, pp. 170–182). Reston, VA:NCTM. Human, P., & Murray, H. (1987). Non-concrete approaches to integer arithmetic. In J.C. Bergeron, N.Herscovics, & C.Kieran (Eds.), Proceedings of the Eleventh International Conference for the Psychology of Mathematics Education (vol. 2, pp. 437–443). Montreal: University of Montreal. (ERIC Document Reproduction Service No. ED 383 532) Hunting, R.P. (1983). Alan: A case study of knowledge of units and performance with fractions . Journal for Research in Mathematics Education, 14, 182–197. Kaput, J.J., & West, M.M. (1994). Missing-value proportional reasoning problems: Factors affecting informal reasoning patterns. In G. Harel & J. Confrey (Eds.), The development of multiplicative reasoning in the learning of mathematics (pp. 235–287). Albany: State University of New York Press. Karplus, R., Pulas S., & Stage E. (1983). Proportional reasoning and early adolescents. In R.Lesh & M.Landau (Eds.), Acquisition of mathematics concepts and processes (pp. 45– 91). New York: Academic Press. Kieren, T.E. (1976). On the mathematical, cognitive and institutional foundations of rational numbers. In R.Lesh & D.Bradbard (Eds.), Number and measurement: Papers from a research workshop (pp. 104–144). Columbus OH: ERIC/SMEAC. (ERIC Document Reproduction Service No. ED 120 027). Kieren, T.E. (1980). The rational number construct—Its elements and mechanisms. In T.E.Kieren (Ed.), Recent research on number learning (pp. 125–149). Columbus, OH: ERIC/SMEAC. (ERIC Document Reproduction Service No. ED 212 463). Kieren, T.E. (1988). Personal knowledge of rational numbers: Its intuitive and formal development . In J.Hiebert & M.Behr (Eds.), Number concepts and operations in the middles grades (pp. 162–181). Reston, VA: National Council of Teachers of Mathematics. Kieren, T.E. (1992). Rational and fractional numbers as mathematical and personal knowledge; Implications for curriculum and instruction. In G.Leinhardt & R.T. Putnam (Eds.), Analysis of arithmetic for mathematics teaching (pp. 323–371). Hillsdale, NJ: Erlbaum. Kouba, V.L., Zawojewski, J.S., & Strutchens, M.E. (1997). What do students know about numbers and operations? In P.A.Kenney & E.A.Silver (Eds.), Results from the sixth mathematics assessment of the National Assessment of Educational Progress (pp. 33– 60). Reston, VA: National Council of Teachers of Mathematics. Kuchemann, D. (1980). Children’s understanding of integers. Mathematics in School, 9, 31–32.
Adding + It Up: Helping Children Learn Mathematics Lamon, S.J. (1993). Ratio and proportion: Connecting content and children’s thinking. Journal for Research in Mathematics Education, 24, 41–61. Lamon, S.J. (1994). Ratio and proportion: Cognitive foundations in unitizing and norming. In G.Harel & J.Confrey (Eds.), The development of multiplicative reasoning in the learning of mathematics (pp. 89–120). Albany: State University of New York Press. Lamon, S.J. (1995). Ratio and proportion: Elementary didactical phenomenology. In J. T.Sowder & B.P Schappell (Eds.), Providing a foundation for teaching mathematics in the middle grades (pp. 167–198). Albany: State University of New York Press. Langrall, C.W., & Swafford, J.O. (2000). Three balloons for two dollars: Developing proportional reasoning. Mathematics Teaching in the Middle School, 6, 254–261. Lappan, G., & Bouck, M.K. (1998). Developing algorithms for adding and subtracting fractions. In L.J.Morrow & M.J.Kenney (Eds.), The teaching and learning of algorithms in school mathematics (1998 Yearbook of the National Council of Teachers of Mathematics, pp. 183–197). Reston, VA: NCTM. Lappan, G., Fey, J.Fitzgerald, W., Friel, S., & Phillips E. (1996). Bits and pieces 2: Using rational numbers. Palo Alto, CA: Dale Seymour. Lesh, R., Post, T.R., & Behr, M. (1988). Proportional reasoning. In J.Hiebert & M.Behr (Eds.), Number concepts and operations in the middle grades (pp. 93–118). Reston, VA: National Council of Teachers of Mathematics. Liebeck, P. (1990). Scores and forfeits: An intuitive model for integer arithmetic. Educational Studies in Mathematics, 21, 221–239. Mack, N.K. (1990). Learning fractions with understanding: Building on informal knowledge. Journal for Research in Mathematics Education, 21, 16–32. Mack, N.K. (1995). Confounding whole-number and fraction concepts when building on informal knowledge. Journal for Research in Mathematics Education, 26, 422–441. Moreno, R., & Mayer, R.E. (1999). Multimedia-supported metaphors for meaning making in mathematics. Cognition and Instruction, 17, 215–248. Morris, A.L. (in press). A teaching experiment: Introducing fourth graders to fractions from the viewpoint of measuring quantities using Davydov’s mathematics curriculum. Focus on Learning Problems in Mathematics. Moss, J., & Case, R. (1999). Developing children’s understanding of the rational numbers: A new model and an experimental curriculum. Journal for Research in Mathematics Education, 30, 122–147. Mukhopadhyay, S., Resnick, L.B., & Schauble, L. (1990). Social sense-making in mathematics; Children’s ideas of negative numbers. Pittsburgh: University of Pittsburgh, Learning Research and Development Center. (ERIC Document Reproduction Service No. ED 342 632 ). Peck, D.M., & Jencks, S.M. (1981). Conceptual issues in the teaching and learning of fractions. Journal for Research in Mathematics Education, 12, 339–348. Post, T., Behr, M., & Lesh, R. (1988). Proportionality and the development of pre-algebra understanding. In A.F.Coxford & A.P.Schulte (Eds.), The ideas of algebra, K-12 (1988 Yearbook of the National Council of Teachers of Mathematics, pp. 78–90). Reston, VA: NCTM. Post, T.P., Wachsmuth, I., Lesh, R., & Behr, M.J. (1985). Order and equivalence of rational numbers: A cognitive analysis. Journal for Research in Mathematics Education, 16, 18–36.
Adding + It Up: Helping Children Learn Mathematics Pothier, Y., & Sawada, D. (1983). Partitioning: The emergence of rational number ideas in young children. Journal for Research in Mathematics Education, 14, 307–317. Pothier, Y., & Sawada, D. (1989). Children’s interpretation of equality in early fraction activities. Focus on Learning Problems in Mathematics, 11(3), 27–38. Resnick, L.B., Nesher, P., Leonard, F., Magone, M., Omanson, S., & Peled, I. (1989). Conceptual bases of arithmetic errors: The case of decimal fractions. Journal for Research in Mathematics Education, 20, 8–27. Schmittau, J. (1993). Connecting mathematical knowledge: A dialectical perspective. Journal of Mathematical Behavior, 12, 179–201. Streefland, L. (1991). Fractions in realistic mathematics education: A paradigm of developmental research. Dordrecht, The Netherlands: Kluwer. Streefland, L. (1993). Fractions: A realistic approach. In T.P.Carpenter, E.Fennema, & T.A.Romberg (Eds.), Rational numbers: An integration of research (pp. 289–325). Hillsdale, NJ: Erlbaum. Thompson, F.M. (1988). Algebraic instruction for the younger child. In A.F.Coxford & A.P.Shulte (Eds.), The ideas of algebra, K-12 (1988 Yearbook of the National Council of Teachers of Mathematics, pp. 69–77). Reston, VA: NCTM. Thompson, P.W., & Dreyfus, T. (1988). Integers as transformations. Journal for Research in Mathematics Education, 19, 115–133. Tourniaire, F., & Pulos, S. (1985). Proportional reasoning: A review of the literature. Educational Studies in Mathematics, 16, 181–204. Vergnaud, G. (1983). Multiplicative structures. In D.Lesh & M.Landau (Eds.), Acquisition of mathematics concepts and processes (pp. 127–174). New York: Academic Press. Wearne, D., & Kouba, V.L. (2000). Rational numbers. In E.A.Silver & P.A.Kenney (Eds.), Results from the seventh mathematics assessment of the National Assessment of Educational Progress (pp. 163–191). Reston, VA: National Council of Teachers of Mathematics.
Dialectic (also dialectics and the dialectical method) is a method of argument for resolving disagreement that has been central to European and Indian philosophy since antiquity. The word dialectic originated in ancient Greece, and was made popular by Plato in the Socratic dialogues. The dialectical method is discourse between two or more people holding different points of view about a subject, who wish to establish the truth of the matter guided by reasoned arguments.
The term dialectics is not synonymous with the term debate. While in theory debaters are not necessarily emotionally invested in their point of view, in practice debaters frequently display an emotional commitment that may cloud rational judgement. Debates are won through a combination of persuading the opponent, proving one's argument correct, or proving the opponent's argument incorrect. Debates do not necessarily require promptly identifying a clear winner or loser; however, clear winners are frequently determined by a judge, a jury, or group consensus. The term dialectics is also not synonymous with the term rhetoric, a method or art of discourse that seeks to persuade, inform, or motivate an audience. Concepts such as "logos" (rational appeal), "pathos" (emotional appeal), and "ethos" (ethical appeal) are intentionally used by rhetoricians to persuade an audience.
The Sophists taught aretē (Greek: ἀρετή, quality, excellence) as the highest value, and the determinant of one's actions in life. The Sophists taught artistic quality in oratory (motivation via speech) as a manner of demonstrating one's aretē. Oratory was taught as an art form, used to please and to influence other people via excellent speech; nonetheless, the Sophists taught the pupil to seek aretē in all endeavours, not solely in oratory.
Socrates favoured truth as the highest value, proposing that it could be discovered through reason and logic in discussion: ergo, dialectic. Socrates valued rationality (appealing to logic, not emotion) as the proper means for persuasion, the discovery of truth, and the determinant of one's actions. To Socrates, truth, not aretē, was the greater good, and each person should, above all else, seek truth to guide one's life. Therefore, Socrates opposed the Sophists and their teaching of rhetoric as art and as emotional oratory requiring neither logic nor proof. Different forms of dialectical reasoning have emerged throughout history from the Indosphere (Greater India) and the West (Europe). These forms include the Socratic method and Hindu, Buddhist, medieval, Hegelian, Marxist, Talmudic, and neo-orthodox dialectics.
The purpose of the dialectic method of reasoning is resolution of disagreement through rational discussion, and, ultimately, the search for truth. One way to proceed—the Socratic method—is to show that a given hypothesis (with other admissions) leads to a contradiction; thus, forcing the withdrawal of the hypothesis as a candidate for truth (see reductio ad absurdum). Another dialectical resolution of disagreement is by denying a presupposition of the contending thesis and antithesis; thereby, proceeding to sublation (transcendence) to synthesis, a third thesis.
It is also possible that the rejection of the participants' presuppositions is resisted, which then might generate a second-order controversy.
Fichtean Dialectics (Hegelian Dialectics) is based upon four concepts:
- Everything is transient and finite, existing in the medium of time.
- Everything is composed of contradictions (opposing forces).
- Gradual changes lead to crises, turning points when one force overcomes its opponent force (quantitative change leads to qualitative change).
- Change is helical (spiral), not circular (negation of the negation).
The concept of dialectic existed in the philosophy of Heraclitus of Ephesus, who proposed that everything is in constant change, as a result of inner strife and opposition. Hence, the history of the dialectical method is the history of philosophy.
Western dialectical forms
According to Kant, the ancient Greeks used the word "dialectic" to signify the logic of false appearance or semblance. To the ancients, "it was nothing but the logic of illusion. It was a sophistic art of giving to one’s ignorance, indeed even to one’s intentional tricks, the outward appearance of truth, by imitating the thorough, accurate method which logic always requires, and by using its topic as a cloak for every empty assertion."
In classical philosophy, dialectic (Greek: διαλεκτική) is a form of reasoning based upon dialogue of arguments and counter-arguments, advocating propositions (theses) and counter-propositions (antitheses). The outcome of such a dialectic might be the refutation of a relevant proposition, a synthesis or combination of the opposing assertions, or a qualitative improvement of the dialogue.
Moreover, the term "dialectic" owes much of its prestige to its role in the philosophies of Socrates and Plato, in the Greek Classical period (5th to 4th centuries BCE). Aristotle said that it was the pre-Socratic philosopher Zeno of Elea who invented dialectic, and the dialogues of Plato are the classic examples of the Socratic dialectical method.
In Plato's dialogues and other Socratic dialogues, Socrates attempts to examine someone's beliefs, at times even first principles or premises by which we all reason and argue. Socrates typically argues by cross-examining his interlocutor's claims and premises in order to draw out a contradiction or inconsistency among them. According to Plato, the rational detection of error amounts to finding the proof of the antithesis. However, important as this objective is, the principal aim of Socratic activity seems to be to improve the soul of his interlocutors, by freeing them from unrecognized errors.
For example, in the Euthyphro, Socrates asks Euthyphro to provide a definition of piety. Euthyphro replies that the pious is that which is loved by the gods. But, Socrates also has Euthyphro agreeing that the gods are quarrelsome and their quarrels, like human quarrels, concern objects of love or hatred. Therefore, Socrates reasons, at least one thing exists that certain gods love but other gods hate. Again, Euthyphro agrees. Socrates concludes that if Euthyphro's definition of piety is acceptable, then there must exist at least one thing that is both pious and impious (as it is both loved and hated by the gods)—which Euthyphro admits is absurd. Thus, Euthyphro is brought to a realization by this dialectical method that his definition of piety is not sufficiently meaningful.
There is another interpretation of the dialectic, as a method of intuition suggested in The Republic. Simon Blackburn writes that the dialectic in this sense is used to understand "the total process of enlightenment, whereby the philosopher is educated so as to achieve knowledge of the supreme good, the Form of the Good”.
Based mainly on Aristotle, the first medieval philosopher to work on dialectics was Boethius. After him, many scholastic philosophers also made use of dialectics in their works, such as Abelard, William of Sherwood, Garlandus Compotista, Walter Burley, Roger Swyneshed and William of Ockham.
This dialectic was formed as follows:
- The Question to be determined
- The principal objections to the question
- An argument in favor of the Question, traditionally a single argument ("On the contrary..")
- The determination of the Question after weighing the evidence. ("I answer that...")
- The replies to each objection
The concept of dialectics was given new life by Georg Wilhelm Friedrich Hegel (following Fichte), whose dialectically dynamic model of nature and of history made it, as it were, a fundamental aspect of the nature of reality (instead of regarding the contradictions into which dialectics leads as a sign of the sterility of the dialectical method, as Immanuel Kant tended to do in his Critique of Pure Reason). In the mid-19th century, the concept of "dialectic" was appropriated by Karl Marx (see, for example, Das Kapital, published in 1867) and Friedrich Engels and retooled in a non-idealist manner, becoming a crucial notion in their philosophy of dialectical materialism. Thus this concept has played a prominent role on the world stage and in world history. In contemporary polemics, "dialectics" may also refer to an understanding of how we can or should perceive the world (epistemology); an assertion that the nature of the world outside one's perception is interconnected, contradictory, and dynamic (ontology); or it can refer to a method of presentation of ideas and conclusions (discourse). According to Hegel, "dialectic" is the method by which human history unfolds; that is to say, history progresses as a dialectical process.
Hegelian dialectic, usually presented in a threefold manner, was stated by Heinrich Moritz Chalybäus as comprising three dialectical stages of development: a thesis, giving rise to its reaction, an antithesis, which contradicts or negates the thesis, and the tension between the two being resolved by means of a synthesis. Although this model is often named after Hegel, he himself never used that specific formulation. Hegel ascribed that terminology to Kant. Carrying on Kant's work, Fichte greatly elaborated on the synthesis model, and popularized it.
On the other hand, Hegel did use a three-valued logical model that is very similar to the antithesis model, but Hegel's most usual terms were: Abstract-Negative-Concrete. Hegel used this writing model as a backbone to accompany his points in many of his works.
The formula, thesis-antithesis-synthesis, does not explain why the thesis requires an antithesis. However, the formula, abstract-negative-concrete, suggests a flaw, or perhaps an incompleteness, in any initial thesis—it is too abstract and lacks the negative of trial, error and experience. For Hegel, the concrete, the synthesis, the absolute, must always pass through the phase of the negative, in the journey to completion, that is, mediation. This is the actual essence of what is popularly called Hegelian Dialectics.
To describe the activity of overcoming the negative, Hegel also often used the term Aufhebung, variously translated into English as "sublation" or "overcoming," to conceive of the working of the dialectic. Roughly, the term indicates preserving the useful portion of an idea, thing, society, etc., while moving beyond its limitations. (Jacques Derrida's preferred French translation of the term was relever).
In the Logic, for instance, Hegel describes a dialectic of existence: first, existence must be posited as pure Being (Sein); but pure Being, upon examination, is found to be indistinguishable from Nothing (Nichts). When it is realized that what is coming into being is, at the same time, also returning to nothing (in life, for example, one's living is also a dying), both Being and Nothing are united as Becoming.
As in the Socratic dialectic, Hegel claimed to proceed by making implicit contradictions explicit: each stage of the process is the product of contradictions inherent or implicit in the preceding stage. For Hegel, the whole of history is one tremendous dialectic, major stages of which chart a progression from self-alienation as slavery to self-unification and realization as the rational, constitutional state of free and equal citizens. The Hegelian dialectic cannot be mechanically applied for any chosen thesis. Critics argue that the selection of any antithesis, other than the logical negation of the thesis, is subjective. Then, if the logical negation is used as the antithesis, there is no rigorous way to derive a synthesis. In practice, when an antithesis is selected to suit the user's subjective purpose, the resulting "contradictions" are rhetorical, not logical, and the resulting synthesis is not rigorously defensible against a multitude of other possible syntheses. The problem with the Fichtean "Thesis-Antithesis-Synthesis" model is that it implies that contradictions or negations come from outside of things. Hegel's point is that they are inherent in and internal to things. This conception of dialectics derives ultimately from Heraclitus.
Hegel has outlined that the purpose of dialectics is "to study things in their own being and movement and thus to demonstrate the finitude of the partial categories of understanding".
One important dialectical principle for Hegel is the transition from quantity to quality, which he terms the Measure. The measure is the qualitative quantum, the quantum is the existence of quantity.
"The identity between quantity and quality, which is found in Measure, is at first only implicit, and not yet explicitly realised. In other words, these two categories, which unite in Measure, each claim an independent authority. On the one hand, the quantitative features of existence may be altered, without affecting its quality. On the other hand, this increase and diminution, immaterial though it be, has its limit, by exceeding which the quality suffers change. [...] But if the quantity present in measure exceeds a certain limit, the quality corresponding to it is also put in abeyance. This however is not a negation of quality altogether, but only of this definite quality, the place of which is at once occupied by another. This process of measure, which appears alternately as a mere change in quantity, and then as a sudden revulsion of quantity into quality, may be envisaged under the figure of a nodal (knotted) line".
As an example, Hegel mentions the states of aggregation of water: "Thus the temperature of water is, in the first place, a point of no consequence in respect of its liquidity: still with the increase or diminution of the temperature of the liquid water, there comes a point where this state of cohesion suffers a qualitative change, and the water is converted into steam or ice". As other examples Hegel mentions the reaching of a point where a single additional grain makes a heap of wheat; or where the bald-tail is produced, if we continue plucking out single hairs.
Another important principle for Hegel is the negation of the negation, which he also terms Aufhebung (sublation): Something is only what it is in its relation to another, but by the negation of the negation this something incorporates the other into itself. The dialectical movement involves two moments that negate each other, something and its other. As a result of the negation of the negation, "something becomes its other; this other is itself something; therefore it likewise becomes an other, and so on ad infinitum". Something in its passage into other only joins with itself, it is self-related. In becoming there are two moments: coming-to-be and ceasing-to-be: by sublation, i.e., negation of the negation, being passes over into nothing, it ceases to be, but something new shows up, is coming to be. What is sublated (aufgehoben) on the one hand ceases to be and is put to an end, but on the other hand it is preserved and maintained. In dialectics, a totality transforms itself; it is self-related, then self-forgetful, relieving the original tension.
The mystification which dialectic suffers in Hegel’s hands, by no means prevents him from being the first to present its general form of working in a comprehensive and conscious manner. With him it is standing on its head. It must be turned right side up again, if you would discover the rational kernel within the mystical shell.
In contradiction to Hegelian idealism, Karl Marx presented Dialectical materialism (Marxist dialectics):
My dialectic method is not only different from the Hegelian, but is its direct opposite. To Hegel, the life-process of the human brain, i.e. the process of thinking, which, under the name of ‘the Idea’, he even transforms into an independent subject, is the demiurgos of the real world, and the real world is only the external, phenomenal form of ‘the Idea’. With me, on the contrary, the ideal is nothing else than the material world reflected by the human mind, and translated into forms of thought. (Capital, Afterword, Second German Ed., Moscow, 1970, vol. 1, p. 29).
In Marxism, the dialectical method of historical study became intertwined with historical materialism, the school of thought exemplified by the works of Marx, Engels, and Vladimir Lenin. In the USSR, under Joseph Stalin, Marxist dialectics became "diamat" (short for dialectical materialism), a theory emphasizing the primacy of the material way of life, social "praxis," over all forms of social consciousness and the secondary, dependent character of the "ideal." The term "dialectical materialism" was coined by the 19th-century social theorist Joseph Dietzgen who used the theory to explain the nature of socialism and social development. The original populariser of Marxism in Russia, Georgi Plekhanov used the terms "dialectical materialism" and "historical materialism" interchangeably. For Lenin, the primary feature of Marx's "dialectical materialism" (Lenin's term) was its application of materialist philosophy to history and social sciences. Lenin's main input in the philosophy of dialectical materialism was his theory of reflection, which presented human consciousness as a dynamic reflection of the objective material world that fully shapes its contents and structure. Later, Stalin's works on the subject established a rigid and formalistic division of Marxist-Leninist theory in the dialectical materialism and historical materialism parts. While the first was supposed to be the key method and theory of the philosophy of nature, the second was the Soviet version of the philosophy of history.
A dialectical method was fundamental to Marxist politics, e.g., the works of Karl Korsch, Georg Lukács and certain members of the Frankfurt School. Soviet academics, notably Evald Ilyenkov and Zaid Orudzhev, continued pursuing unorthodox philosophic study of Marxist dialectics; likewise in the West, notably the philosopher Bertell Ollman at New York University.
Friedrich Engels proposed that Nature is dialectical, discussing the negation of the negation in Anti-Dühring.
In Dialectics of Nature, Engels said:
Probably the same gentlemen who up to now have decried the transformation of quantity into quality as mysticism and incomprehensible transcendentalism will now declare that it is indeed something quite self-evident, trivial, and commonplace, which they have long employed, and so they have been taught nothing new. But to have formulated for the first time in its universally valid form a general law of development of Nature, society, and thought, will always remain an act of historic importance.
Marxist dialectics is exemplified in Das Kapital (Capital), which outlines two central theories: (i) surplus value and (ii) the materialist conception of history; Marx explains dialectical materialism:
In its rational form, it is a scandal and abomination to bourgeoisdom and its doctrinaire professors, because it includes in its comprehension an affirmative recognition of the existing state of things, at the same time, also, the recognition of the negation of that state, of its inevitable breaking up; because it regards every historically developed social form as in fluid movement, and therefore takes into account its transient nature not less than its momentary existence; because it lets nothing impose upon it, and is in its essence critical and revolutionary.
Class struggle is the central contradiction to be resolved by Marxist dialectics, because of its central role in the social and political lives of a society. Nonetheless, Marx and Marxists developed the concept of class struggle to comprehend the dialectical contradictions between mental and manual labor, and between town and country. Hence, philosophic contradiction is central to the development of dialectics — the progress from quantity to quality, the acceleration of gradual social change; the negation of the initial development of the status quo; the negation of that negation; and the high-level recurrence of features of the original status quo. In the USSR, Progress Publishers issued anthologies of dialectical materialism by Lenin, wherein he also quotes Marx and Engels:
As the most comprehensive and profound doctrine of development, and the richest in content, Hegelian dialectics was considered by Marx and Engels the greatest achievement of classical German philosophy.... “The great basic thought”, Engels writes, “that the world is not to be comprehended as a complex of ready-made things, but as a complex of processes, in which the things, apparently stable no less than their mind images in our heads, the concepts, go through an uninterrupted change of coming into being and passing away... this great fundamental thought has, especially since the time of Hegel, so thoroughly permeated ordinary consciousness that, in its generality, it is now scarcely ever contradicted. But, to acknowledge this fundamental thought in words, and to apply it in reality in detail to each domain of investigation, are two different things.... For dialectical philosophy nothing is final, absolute, sacred. It reveals the transitory character of everything and in everything; nothing can endure before it, except the uninterrupted process of becoming and of passing away, of endless ascendancy from the lower to the higher. And dialectical philosophy, itself, is nothing more than the mere reflection of this process in the thinking brain.” Thus, according to Marx, dialectics is “the science of the general laws of motion both of the external world and of human thought”.
Lenin describes his dialectical understanding of the concept of development:
A development that repeats, as it were, stages that have already been passed, but repeats them in a different way, on a higher basis (“the negation of the negation”), a development, so to speak, that proceeds in spirals, not in a straight line; a development by leaps, catastrophes, and revolutions; “breaks in continuity”; the transformation of quantity into quality; inner impulses towards development, imparted by the contradiction and conflict of the various forces and tendencies acting on a given body, or within a given phenomenon, or within a given society; the interdependence and the closest and indissoluble connection between all aspects of any phenomenon (history constantly revealing ever new aspects), a connection that provides a uniform, and universal process of motion, one that follows definite laws — these are some of the features of dialectics as a doctrine of development that is richer than the conventional one.
It is possible that I could disgrace myself. But there's always a bit of Dialectic to help out. I have naturally expressed my statements so that I am also right if the opposite thing happens.
Indian forms of dialectic
Indian continental debate: an intra- and inter-Dharmic dialectic
Anacker (2005: p. 20), in the introduction to his translation of seven works by the Buddhist monk Vasubandhu (fl. 4th century), a famed dialectician of the Gupta Empire, contextualizes the prestige of dialectic and cut-throat debate in classical India and makes references to the possibly apocryphal story of the banishment of Moheyan post-debate with Kamalaśīla (fl. 713–763):
Philosophical debating was in classical India often a spectator-sport, much as contests of poetry-improvisation were in Germany in its High Middle Ages, and as they still are in the Telugu country today. The king himself was often the judge at these debates, and loss to an opponent could have serious consequences. To take an atrociously extreme example, when the Tamil Śaivite Ñānasambandar Nāyanār defeated the Jain ācāryas in Madurai before the Pāṇḍya King Māravarman Avaniśūlāmani (620-645) this debate is said to have resulted in the impalement of 8000 Jains, an event still celebrated in the Mīnāksi Temple of Madurai today. Usually, the results were not so drastic; they could mean formal recognition by the defeated side of the superiority of the winning party, forced conversions, or, as in the case of the Council of Lhasa, which was conducted by Indians, banishment of the losers.
While Western philosophy traces dialectics to the ancient Greek thought of Socrates and Plato, the idea of tension between two opposing forces leading to synthesis is much older and present in Hindu philosophy. Indian philosophy, for the most part subsumed within the Indian religions, has an ancient tradition of dialectic polemics. The two complements, "purusha" (the active cause) and "prakriti" (the passive nature), bring everything into existence. They follow the "rta", the Dharma (Universal Law of Nature).
Anekantavada and Syadvada are the sophisticated dialectic traditions developed by the Jains to arrive at truth. As per Jainism, the truth or the reality is perceived differently from different points of view, and no single point of view is the complete truth. The Jain doctrine of Anekantavada states that an object has infinite modes of existence and qualities and, as such, it cannot be completely perceived in all its aspects and manifestations, due to the inherent limitations of being human. Only the Kevalis—the omniscient beings—can comprehend the object in all its aspects and manifestations; all others are capable of knowing only a part of it. Consequently, no one view can claim to represent the absolute truth. According to Jains, the ultimate principle should always be logical and no principle can be devoid of logic or reason. Thus one finds in the Jain texts deliberative exhortations on any subject in all its facets, may they be constructive or obstructive, inferential or analytical, enlightening or destructive.
Syādvāda is a theory of conditioned predication that provides an expression to anekānta by recommending that epithet Syād be attached to every expression. Syādvāda is not only an extension of Anekānta ontology, but a separate system of logic capable of standing on its own force. The Sanskrit etymological root of the term Syād is "perhaps" or "maybe", but in context of syādvāda, it means "in some ways" or "from a perspective." As reality is complex, no single proposition can express the nature of reality fully. Thus the term "syāt" should be prefixed before each proposition giving it a conditional point of view and thus removing any dogmatism in the statement. Since it ensures that each statement is expressed from seven different conditional and relative view points or propositions, it is known as theory of conditioned predication. These seven propositions also known as saptabhangi are:
- syād-asti: "in some ways it is"
- syād-nāsti: "in some ways it is not"
- syād-asti-nāsti: "in some ways it is and it is not"
- syād-asti-avaktavyaḥ: "in some ways it is and it is indescribable"
- syād-nāsti-avaktavyaḥ: "in some ways it is not and it is indescribable"
- syād-asti-nāsti-avaktavyaḥ: "in some ways it is, it is not and it is indescribable"
- syād-avaktavyaḥ: "in some ways it is indescribable"
Buddhism has developed sophisticated and sometimes highly institutionalized traditions of dialectics during its long history. Nalanda University, and later the Gelugpa Buddhism of Tibet, are examples. The historical development and clarification of Buddhist doctrine and polemics, through dialectics and formal debate, is well documented. Buddhist doctrine was rigorously critiqued (though not ultimately refuted) in the 2nd century by Nagarjuna, whose uncompromisingly logical approach to the realisation of truth became the basis for the development of a vital stream of Buddhist thought. This dialectical approach of Buddhism, to the elucidation and articulation of an account of the Cosmos as the truth it really is, became known as the Perfection of Wisdom and was later developed by other notable thinkers, such as Dignaga and Dharmakirti (between 500 and 700). The dialectical method of truth-seeking is evident throughout the traditions of Madhyamaka, Yogacara, and Tantric Buddhism. Trisong Detsen, and later Je Tsongkhapa, championed the value of dialectic and of formalised training in debate in Tibet.
Neo-orthodoxy, in Europe also known as theology of crisis and dialectical theology, is an approach to theology in Protestantism that was developed in the aftermath of the First World War (1914–1918). It is characterized as a reaction against doctrines of 19th-century liberal theology and a more positive reevaluation of the teachings of the Reformation, much of which had been in decline (especially in western Europe) since the late 18th century. It is primarily associated with two Swiss professors and pastors, Karl Barth (1886–1968) and Emil Brunner (1899–1966), even though Barth himself expressed his unease in the use of the term.
Dialectical method and dualism
Another way to understand dialectics is to view it as a method of thinking to overcome formal dualism and monistic reductionism. For example, formal dualism regards the opposites as mutually exclusive entities, whilst monism finds each to be an epiphenomenon of the other. Dialectical thinking rejects both views. The dialectical method requires focus on both at the same time. It looks for a transcendence of the opposites entailing a leap of the imagination to a higher level, which (1) provides justification for rejecting both alternatives as false and/or (2) helps elucidate a real but previously veiled integral relationship between apparent opposites that have been kept apart and regarded as distinct. For example, the superposition principle of quantum physics can be explained using the dialectical method of thinking—likewise the example below from dialectical biology. Such examples, showing the relationship of the dialectical method of thinking to the scientific method, largely negate the criticism of Popper (see text below) that the two are mutually exclusive. The dialectic method also examines false alternatives presented by formal dualism (materialism vs idealism; rationalism vs empiricism; mind vs body, etc.) and looks for ways to transcend the opposites and form synthesis. In the dialectical method, both have something in common, and understanding of the parts requires understanding their relationship with the whole system. The dialectical method thus views the whole of reality as an evolving process.
Some philosophers have offered critiques of dialectic, and it can even be said that hostility or receptivity to dialectics is one of the things that divides 20th-century Anglo-American philosophy from the so-called "continental" tradition, a divide that only a few contemporary philosophers (among them, G.H. von Wright, Paul Ricoeur, Hans-Georg Gadamer, Richard Rorty, Charles Taylor) have ventured to bridge.
It is generally thought dialectics has become central to "Continental" philosophy, while it plays no part in "Anglo-American" philosophy. In other words, on the continent of Europe, dialectics has entered intellectual culture as what might be called a legitimate part of thought and philosophy, whereas in America and Britain, the dialectic plays no discernible part in the intellectual culture, which instead tends toward positivism. A prime example of the European tradition is Jean-Paul Sartre's Critique of Dialectical Reason, which is very different from the works of Popper, whose philosophy was for a time highly influential in the UK where he resided (see below). Sartre states:
- "Existentialism, like Marxism, addresses itself to experience in order to discover there concrete syntheses. It can conceive of these syntheses only within a moving, dialectical totalisation, which is nothing else but history or—from the strictly cultural point of view adopted here—'philosophy-becoming-the world'."
Karl Popper has attacked the dialectic repeatedly. In 1937 he wrote and delivered a paper entitled "What Is Dialectic?" in which he attacked the dialectical method for its willingness "to put up with contradictions". Popper concluded the essay with these words: "The whole development of dialectic should be a warning against the dangers inherent in philosophical system-building. It should remind us that philosophy should not be made a basis for any sort of scientific system and that philosophers should be much more modest in their claims. One task which they can fulfill quite usefully is the study of the critical methods of science" (Ibid., p. 335).
In chapter 12 of volume 2 of The Open Society and Its Enemies (1944; 5th rev. ed., 1966) Popper unleashed a famous attack on Hegelian dialectics, in which he held that Hegel's thought (unjustly, in the view of some philosophers such as Walter Kaufmann) was to some degree responsible for facilitating the rise of fascism in Europe by encouraging and justifying irrationalism. In section 17 of his 1961 "addenda" to The Open Society, entitled "Facts, Standards and Truth: A Further Criticism of Relativism," Popper refused to moderate his criticism of the Hegelian dialectic, arguing that it "played a major role in the downfall of the liberal movement in Germany,... by contributing to historicism and to an identification of might and right, encouraged totalitarian modes of thought... [and] undermined and eventually lowered the traditional standards of intellectual responsibility and honesty".
In the past few decades, European and American logicians have attempted to provide mathematical foundations for dialectical logic or argument. There had been pre-formal treatises on argument and dialectic, from authors such as Stephen Toulmin (The Uses of Argument), Nicholas Rescher (Dialectics), and van Eemeren and Grootendorst (Pragma-dialectics). One can include the communities of informal logic and paraconsistent logic. However, building on theories of defeasible reasoning (see John L. Pollock), systems have been built that define well-formedness of arguments, rules governing the process of introducing arguments based on fixed assumptions, and rules for shifting burden. Many of these logics appear in the special area of artificial intelligence and law, though the computer scientists' interest in formalizing dialectic originates in a desire to build decision support and computer-supported collaborative work systems.
- Chinese philosophy
- Critical theory (Frankfurt School)
- Dialectic process vs. dialogic process
- Dialectical behavioral therapy
- Dialectical research
- False dilemma
- Gotthard Günther
- Reflective equilibrium
- Relational dialectics
- Strange loop
- Universal dialectic
- Interdisciplinary concepts
- The Republic (Plato), 348b
- Corbett, Edward P. J.; Robert J. Connors (1999). Classical Rhetoric For the Modern Student (4th ed.). New York: Oxford University Press. p. 1. ISBN 9780195115420.
- Corbett, Edward P. J.; Robert J. Connors (1999). Classical Rhetoric For the Modern Student (4th ed.). New York: Oxford University Press. p. 18. ISBN 9780195115420.
- see Gorgias, 449B: "Socrates: Would you be willing then, Gorgias, to continue the discussion as we are now doing [Dialectic], by way of question and answer, and to put off to another occasion the (emotional) speeches [Rhetoric] that [the Sophist] Polus began?"
- Pinto, R. C. (2001). Argument, inference and dialectic: collected papers on informal logic. Argumentation library, vol. 4. Dordrecht:Kluwer Academic. pp. 138–139.
- Eemeren, F. H. v. (2003). Anyone who has a view: theoretical contributions to the study of argumentation. Argumentation library, vol. 8. Dordrecht:Kluwer Academic. p. 92.
- The musicologist Rose Rosengard Subotnick gives this example: "A question posed in a Fred Friendly Seminar entitled Hard Drugs, Hard Choices: The Crisis Beyond Our Borders (aired on WNET on February 26, 1990), illustrates that others, too, seem to find this dynamic enlightening: 'Are our lives so barren because we use drugs? Or do we use drugs because our lives are so barren?' The question is dialectical to the extent that it enables one to grasp the two opposed priorities as simultaneously valid."
- Jon Mills (2005). Treating attachment pathology. Jason Aronson. pp. 159–166. ISBN 978-0-7657-0132-9. Retrieved 8 May 2011.
- Herbermann, C. G. (1913) The Catholic encyclopedia: an international work of reference on the constitution, doctrine, and history of the Catholic church. New York: The Encyclopedia press, inc. Page 160
- Howard Ll. Williams, Hegel, Heraclitus, and Marx's Dialectic. Harvester Wheatsheaf 1989. 256 pages. ISBN 0-7450-0527-6
- Denton Jaques Snider, Ancient European Philosophy: The History of Greek Philosophy Psychologically Treated. Sigma publishing co. 1903. 730 pages. Pages 116-119.
- Cassin, Barbara (ed.), Vocabulaire européen des philosophies [Paris: Le Robert & Seuil, 2004], p. 306, trans. M.K. Jensen
- Critique of Pure Reason, A 61
- Ayer, A. J., & O'Grady, J. (1992). A Dictionary of Philosophical Quotations. Oxford, UK: Blackwell Publishers. Page 484.
- McTaggart, J. M. E. (1964). A commentary on Hegel's logic. New York: Russell & Russell. p. 11
- ([fr. 65], Diog. IX 25ff and VIII 57)
- Vlastos, G., Burnyeat, M. (Ed.) (1994) Socratic Studies, Cambridge U.P. ISBN 0-521-44735-6 Ch. 1
- Popper, K. (1962) The Open Society and its Enemies, Volume 1, London, Routledge, p. 133.
- Blackburn, Simon. 1996. The Oxford Dictionary of Philosophy. Oxford: Oxford University Press.
- Abelson, P. (1965). The seven liberal arts; a study in mediæval culture. New York: Russell & Russell. Page 82.
- Hyman, A., & Walsh, J. J. (1983). Philosophy in the Middle Ages: the Christian, Islamic, and Jewish traditions. Indianapolis: Hackett Pub. Co. Page 164.
- Adler, Mortimer Jerome (2000). "Dialectic". Routledge. Page 4. ISBN 0-415-22550-7
- Herbermann, C. G. (1913). The Catholic encyclopedia: an international work of reference on the constitution, doctrine, and history of the Catholic church. New York: The Encyclopedia press, inc. Page 760–764.
- From topic to tale: logic and narrativity in the Middle Ages, by Eugene Vance, pp. 43–45
- "Catholic Encyclopedia: Peter Abelard". Newadvent.org. 1907-03-01. Retrieved 2011-11-03.
- William of Sherwood's Introduction to logic, by Norman Kretzmann, pp. 69–102
- A History of Twelfth-Century Western Philosophy, by Peter Dronke, p. 198
- Medieval literary politics: shapes of ideology, by Sheila Delany, p. 11
- Nicholson, J. A. (1950). Philosophy of religion. New York: Ronald Press Co. Page 108.
- Kant, I., Guyer, P., & Wood, A. W. (2003). Critique of pure reason. Cambridge: Cambridge University Press. Page 495.
- The Accessible Hegel. Michael Allen Fox. Prometheus Books. 2005. p.43. Also see Hegel's preface to the Phenomenology of Spirit, trans. A.V. Miller (Oxford: Clarendon Press, 1977), secs. 50, 51, p.29. 30.
- See 'La différance' in Margins of philosophy. Alan Bass, translator. University of Chicago Books. 1982. p. 19, fn 23.
- Hegel. "Section in question from Hegel's ''Science of Logic''". Marxists.org. Retrieved 2011-11-03.
- Hegel, Georg Wilhelm Friedrich. 1874. The Logic. Encyclopaedia of the Philosophical Sciences. 2nd Edition. London: Oxford University Press. Note to §81
- Hegel, Georg Wilhelm Friedrich. 1874. The Logic. Encyclopaedia of the Philosophical Sciences. 2nd Edition. London: Oxford University Press. §§107-111
- Hegel, Georg Wilhelm Friedrich. 1874. The Logic. Encyclopaedia of the Philosophical Sciences. 2nd Edition. London: Oxford University Press. §§108-109
- Hegel, Georg Wilhelm Friedrich. 1874. The Logic. Encyclopaedia of the Philosophical Sciences. 2nd Edition. London: Oxford University Press. §108
- Hegel, Georg Wilhelm Friedrich. 1874. The Logic. Encyclopaedia of the Philosophical Sciences. 2nd Edition. London: Oxford University Press. §93
- Hegel, Georg Wilhelm Friedrich. 1874. The Logic. Encyclopaedia of the Philosophical Sciences. 2nd Edition. London: Oxford University Press. §95
- Hegel, Georg Wilhelm Friedrich. 1812. Hegel's Science of Logic. London. Allen & Unwin. §§176-179.
- Hegel, Georg Wilhelm Friedrich. 1812. Hegel's Science of Logic. London. Allen & Unwin. §185.
- Marx, Karl (1873) Capital Afterword to the Second German Edition, Vol. I
- Engels, Frederick, (1877) Anti-Dühring, Part I: Philosophy, XIII. Dialectics. Negation of the Negation.
- Engels, Frederick, (1883) Dialectics of Nature, II. Dialectics. Marxists.org. Retrieved 2011-11-03.
- Marx, Karl, (1873) Capital Vol. I, Afterword to the Second German Edition.
- Lenin, V.I., On the Question of Dialectics: A Collection, pp. 7-9. Progress Publishers, Moscow, 1980.
- In German: Es ist möglich, daß ich mich blamiere. Indes ist dann immer mit einiger Dialektik zu helfen. Ich habe natürlich meine Aufstellungen so gehalten, daß ich im umgekehrten Fall auch Recht habe, K. Marx, F. Engels, "Works", vol. 29
- Anacker, Stefan (2005, rev. ed.). Seven Works of Vasubandhu: The Buddhist Psychological Doctor. Delhi, India: Motilal Banarsidass. (First published: 1984; Reprinted: 1986, 1994, 1998; Corrected: 2002; Revised: 2005), p.20
- Paul Ernest; Brian Greer; Bharath Sriraman (30 June 2009). Critical issues in mathematics education. IAP. p. 327. ISBN 978-1-60752-039-9. Retrieved 8 July 2011.
- Dundas (2002)
- Koller, John M. (July 2000).
- Duli Chandra Jain (ed.) (1997) p.21
- Hughes, Marilynn (2005) P. 590
- Chatterjea, Tara (2001) p.77-87
- Koller, John M. (July 2000). "Syādvāda as the [[epistemological]] key to the Jaina middle way metaphysics of Anekāntavāda". Philosophy East and West (Honululu) 50 (3): Pp.400–8. ISSN 00318221. Retrieved 2007-10-01. Wikilink embedded in URL title (help)
- Grimes, John (1996) p. 312
- "Original Britinnica online". Retrieved 2008-07-26.
- "Britannica Encyclopedia (online)". Retrieved 2008-07-26.
- "Merriam-Webster Dictionary(online)". Retrieved 2008-07-26.
- "American Heritage Dictionary (online)". Retrieved 2008-07-26.
- See Church Dogmatics III/3, xii.
- Biel, R. and Mu-Jeong Kho (2009) "The Issue of Energy within a Dialectical Approach to the Regulationist Problematique," Recherches & Régulation Working Papers, RR Série ID 2009-1, Association Recherche & Régulation: 1-21.
- Jean-Paul Sartre. "The Search for Method (1st part) Sartre, 1960, in Existentialism from Dostoyevsky to Sartre, transl. Hazel Barnes, Vintage Books". Marxists.org. Retrieved 2011-11-03.
- Karl Popper,Conjectures and Refutations: The Growth of Scientific Knowledge [New York: Basic Books, 1962], p. 316.
- Walter Kaufmann. "kaufmann". Marxists.org. Retrieved 2011-11-03.
- Karl Popper,The Open Society and Its Enemies, 5th rev. ed., vol. 2 [Princeton: Princeton University Press, 1966], p. 395
- See Logical models of argument, CI Chesñevar, AG Maguitman, R Loui - ACM Computing Surveys, 2000 and Logics for defeasible argumentation, H Prakken, Handbook of philosophical logic, 2002 for surveys of work in this area.
- McKeon, R. (1954) "Dialectic and Political Thought and Action." Ethics 65, No. 1: 1-33.
- Postan, M. (1962) "Function and Dialectic in Economic History," The Economic History Review, No. 3.
- Biel, R. and Mu-Jeong Kho (2009) "The Issue of Energy within a Dialectical Approach to the Regulationist Problematique," Recherches & Régulation Working Papers, RR Série ID 2009-1, Association Recherche & Régulation: 1-21.
- Bertell Ollman Writings on Dialectics: Dialectics.Net
- David Walls, Dialectical Social Science
- Dialectics for Kids
15.1 Asymmetric Encryption Explained
Asymmetric encryption, often called "public key" encryption, allows Alice to send Bob an encrypted message without a shared secret key; there is a secret key, but only Bob knows what it is, and he does not share it with anyone, including Alice. Figure 15-1 provides an overview of this asymmetric encryption, which works as follows:
Figure 15-1. Asymmetric encryption does not require Alice and Bob to agree on a secret key
The key that Bob sends to Alice is the public key, and the key he keeps to himself is the "private" key; jointly, they form Bob's "key pair."
The most striking aspect of asymmetric encryption is that Alice is not involved in selecting the key: Bob creates a pair of keys without any input or agreement from Alice and simply sends her the public key. Bob retains the private key and keeps it secret. Alice uses an asymmetric algorithm to encrypt a message with Bob's public key and sends him the encrypted data, which he decrypts using the private key.
Asymmetric algorithms include a "key generation" protocol that Bob uses to create his key pair, as shown by Figure 15-2. Following the protocol results in the creation of a pair of keys that have a mathematical relationshipthe exact detail of the protocol and the relationship between the keys is different for each algorithm.
Figure 15-2. Bob uses a key generation protocol to create a new key pair
When we talk about an asymmetric encryption algorithm, we are actually referring to two related functions that perform specific tasks: an encryption function that encrypts a message using a public key, and a decryption function that uses a secret key to decrypt a message encrypted with the corresponding public key.
The encryption function can only encrypt data. Alice cannot decrypt ciphertext that she has created using the encryption function. This means that Bob can send his public key to several people, and each of them can create ciphertext that only Bob's secret key can decrypt, as shown in Figure 15-3.
Figure 15-3. Alice and Anthony are able to use the same public key to create ciphertext that can only be decrypted using Bob's secret key
The one-way nature of the encryption function means that messages created by one sender cannot be read by another (i.e., Alice cannot decrypt the ciphertext that Anthony has created, even though they both have Bob's public key). Bob can give out the public key to anyone who wants to send him a message, and he can even print his public key on his business card and hand it out to anyone who might want to send him a message. He can add the public key to an Internet directory of keys, allowing people Bob has never met to create messages that only he can read.
If Bob suspects that Eve has guessed his private key, he simply creates a new key pair and sends out the new public key to anyone who might send him a message. This is a lot easier than arranging to meet in a secure location to agree on a new symmetric secret key with every person that might want to communicate with him. In practice, the process of changing key pairs is more complex, and we have more to say on this topic in Chapter 17.
Bob's pair of keys allows Alice to send him encrypted messages, but Bob cannot use them to send a message back to Alice because of the one-way nature of the encryption and decryption functions. If Bob needs to send Alice a confidential message, Alice must create her own pair of keys and send the public key to Bob, who can then use the encryption function to create ciphertext that only Alice can decrypt with her private key.
The main limitation of public key encryption is that it is very slow relative to symmetric encryption and is not practical for encrypting large amounts of data. In fact, the most common use of public key encryption is to solve the key agreement problem for symmetric encryption algorithms, which we discuss in more detail in Chapter 17.
In the following sections, we demonstrate how an asymmetric encryption algorithm works. We use the RSA algorithm for our illustration because it is the only one implemented in the .NET Framework. Ronald Rivest, Adi Shamir, and Leonard Adleman created the RSA algorithm in 1977, and its name is formed from the first letters of the inventors' last names. The RSA algorithm is the basis for numerous security systems, and remains the most widely used and understood asymmetric algorithm.
15.1.1 Creating Asymmetric Keys
Most asymmetric algorithms use keys that are very large numbers, and the RSA algorithm is no exception. In this section, we demonstrate the RSA key generation protocol and provide you with some general information about the structure and usage of asymmetric keys.
We step through the RSA key generation protocol, using small test values. The protocol is as follows:
You can see how simple it is to create an RSA key pair. Bob sends the value of e (19) and n (713) to Alice and keeps the value of d (139) secret. Most asymmetric encryption algorithms use a similar approach to key generation; we explain the protocol for a different asymmetric algorithm in Section 15.3.
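Although the numbered steps of the protocol are not reproduced above, the arithmetic behind them is small enough to sketch in code. The following is a minimal, illustrative Python sketch (not from the original text) that reproduces the chapter's test values; the function name is our own, and the modular inverse via pow() assumes Python 3.8 or later.

```python
from math import gcd

def generate_keypair(p, q, e):
    n = p * q                        # public modulus
    phi = (p - 1) * (q - 1)          # Euler's totient of n
    assert gcd(e, phi) == 1, "e must be coprime with phi(n)"
    d = pow(e, -1, phi)              # private exponent: modular inverse of e mod phi
    return (e, n), d                 # public key (e, n); private key d

public, private = generate_keypair(23, 31, 19)
print(public, private)               # (19, 713) 139
```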
15.1.2 Asymmetric Algorithm Security
Asymmetric algorithms use much longer keys than symmetric algorithms. In our examples, we selected small values to demonstrate the key generation protocol, but the numeric values used in practice contain many hundreds of digits.
Asymmetric key lengths are measured in bits, but the way to determine the number of bits differs between algorithms. The RSA algorithm specifies that the key length is the smallest number of bits needed to represent the value of the key modulus, n, in binary. The number of bits is rounded up to a multiple of eight so that the key can be expressed in bytes; for example, a modulus represented by 509 bits is considered to be a 512-bit key. Common key lengths are 512 and 1024 bits, but a 1024-bit asymmetric key does not provide 16 times more resistance to attack than a 64-bit symmetric key. Table 15-1 lists the asymmetric and symmetric key lengths accepted as providing equivalent resistance to brute force attacks (where Eve obtains the value of the private/secret key by testing all of the possible key values).
Most asymmetric algorithms rely on some form of mathematical task that is difficult or time-consuming to perform. Cryptographers consider the RSA algorithm secure because it is hard to find the factors of a large number; given our example value for n (713), it would take some time to establish that the factors are the values we selected for p (23) and q (31). The longer the value of n, the longer it takes to determine the factors; bear in mind that the example value of n has only three digits, whereas the numbers that you would use for a secure key pair will be significantly longer.
Therefore, the essence of security for RSA is that given only the public key e and n, it takes a lot of computation to discover the private key d. Once you know the factors p and q, it is relatively easy to calculate d and decrypt ciphertext. Bob makes it difficult for Eve to decrypt his messages by keeping secret the values of d, p and q.
The main risk with asymmetric algorithms is that someone may discover a technique to solve the mathematical problems quickly, undermining the security of the algorithm. New techniques to factor large numbers might make it a simple process to decrypt messages or to deduce the value of the private key from the public key, and this would render the RSA algorithm (and any others that rely on the same mathematical problem) insecure.
15.1.3 Creating the Encrypted Data
We have already explained how the encryption and decryption functions are at the heart of an asymmetric algorithm, and the way in which we use these functions is similar to the techniques we discussed in Chapter 14. The protocol for encrypting data using an asymmetric algorithm is as follows:
The length of the public key determines the size of the blocks into which we break the plaintext. Each algorithm specifies a rule for working out how many bytes of data should be in each block, and for the RSA protocol, we subtract 1 from the key length (in bits) and divide the result by 8. The integral part of the result (the part of the number to the left of the decimal point) tells you how many bytes of data should be in each block of plaintext that is passed to the encryption function. You can work out how many bytes should be in a plaintext block for a 1024-bit key as follows: (1024 - 1) / 8 = 127.875.
The integral value of the result is 127, meaning that when using the RSA algorithm with a 1024-bit public key we should break the plaintext into blocks of 127 bytes. The small key that we generated to demonstrate the key generation protocol is 16 bits long (713 is 1011001001 in binary, and the bit length is therefore rounded up to 16 bits), and with a 16-bit key, we must use 1-byte blocks (the integral part of (16 - 1)/8 = 1.875 is 1).
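The block-size rule is simple enough to express directly. The following one-function Python sketch (our own illustration, not part of the original text) applies the rule to the two key sizes just discussed:

```python
def plaintext_block_bytes(modulus_bits):
    # Integral part of (key length in bits - 1) / 8, per the rule described above.
    return (modulus_bits - 1) // 8

print(plaintext_block_bytes(1024))   # 127 bytes per block for a 1024-bit key
print(plaintext_block_bytes(16))     # 1 byte per block for the 16-bit test key
```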
We encrypt each data block by interpreting it as an integer value and computing a ciphertext block using the RSA encryption function shown below (m is the integer value of the plaintext block and c is the ciphertext block):
c = m^e mod n
Figure 15-4 demonstrates how this process works for a 24-bit key, meaning that you process 2 bytes of plaintext at a time; the figure shows how the encryption function is applied to encrypt the string ".NET" into the ciphertext "35 7B AE 05 F1 6F."
Figure 15-4. Using the RSA encryption function to encrypt the string .NET using a 24-bit public key
For reference, we created our 24-bit key using 1901 for p and 1999 for q. We chose e to be 19, giving a secret key value, d, of 2805887. Notice that the output of the cipher function is the same length as the key modulus, making the ciphertext larger than the plaintext.
Decrypting data is the reverse of the encryption protocol:
The decryption function is as follows (c is the value of the ciphertext block and m is the plaintext block):
m = c^d mod n
Notice that the decryption function uses the secret key (d) and the modulus from the public key (n). Figure 15-5 demonstrates how this process works for our 24-bit key, meaning that we process 3 bytes of ciphertext at a time; the figure shows how we decrypt the ciphertext that we created in Figure 15-4.
Figure 15-5. Using the RSA decryption function
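To tie the two functions together, here is a minimal, illustrative Python round trip using the 16-bit test key from Section 15.1.1 (our own sketch, not from the original text). It operates on raw 1-byte blocks and deliberately omits padding, which is why it must never be used as-is in a real system; see the padding discussion below.

```python
e, n, d = 19, 713, 139               # the 16-bit test key (public e, modulus n, private d)

def encrypt(plaintext: bytes):
    # c = m^e mod n for each 1-byte block m (raw "textbook" RSA, no padding)
    return [pow(m, e, n) for m in plaintext]

def decrypt(blocks):
    # m = c^d mod n recovers each original byte
    return bytes(pow(c, d, n) for c in blocks)

ciphertext = encrypt(b".NET")
print(decrypt(ciphertext))           # b'.NET'
```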
15.1.3.1 Asymmetric data padding
Asymmetric encryption algorithms rely on padding to protect against specific kinds of attack, in much the same way that symmetric algorithms rely on cipher feedback. Padding schemes also ensure that the encryption function does not have to process partial blocks of data.
Asymmetric padding schemes are a series of instructions that specify how to prepare data before encryption, and usually mix the plaintext with other data to create a ciphertext that is much larger than the original message. The .NET Framework supports two padding schemes for the RSA algorithm: Optimal Asymmetric Encryption Padding (OAEP) and PKCS #1 v1.5. OAEP is a newer scheme that provides protection from attacks to which the PKCS #1 v1.5 scheme is susceptible. You should always use OAEP, unless you need to exchange encrypted data with a legacy application that expects PKCS #1 v1.5 padding.
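As a hedged illustration of OAEP in practice, the sketch below uses the third-party Python "cryptography" package rather than the .NET classes this book covers; the package, its API, and the 2048-bit key size are our own assumptions, chosen only to show how a padding scheme is supplied alongside the keys.

```python
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

# Generate a 2048-bit key pair, then encrypt and decrypt a short message with OAEP.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

ciphertext = public_key.encrypt(b"confidential data", oaep)
assert private_key.decrypt(ciphertext, oaep) == b"confidential data"
```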
We do not discuss the details of either padding scheme in this book; it is important only that you understand that padding is used in conjunction with the asymmetric algorithm to further protect confidential data.
Rendering (computer graphics)
Rendering is the process of generating an image from a model (or models in what collectively could be called a scene file), by means of computer programs. Also, the results of such a model can be called a rendering. A scene file contains objects in a strictly defined language or data structure; it would contain geometry, viewpoint, texture, lighting, and shading information as a description of the virtual scene. The data contained in the scene file is then passed to a rendering program to be processed and output to a digital image or raster graphics image file. The term "rendering" may be by analogy with an "artist's rendering" of a scene. Though the technical details of rendering methods vary, the general challenges to overcome in producing a 2D image from a 3D representation stored in a scene file are outlined as the graphics pipeline along a rendering device, such as a GPU. A GPU is a purpose-built device able to assist a CPU in performing complex rendering calculations. If a scene is to look relatively realistic and predictable under virtual lighting, the rendering software should solve the rendering equation. The rendering equation doesn't account for all lighting phenomena, but is a general lighting model for computer-generated imagery. 'Rendering' is also used to describe the process of calculating effects in a video editing program to produce final video output.
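For reference, the rendering equation mentioned above is commonly written in the following standard form (our formulation, not quoted from this article): outgoing radiance at a point equals emitted radiance plus the hemisphere integral of the BRDF times incoming radiance times the cosine of the angle of incidence.

```latex
L_o(\mathbf{x}, \omega_o) = L_e(\mathbf{x}, \omega_o)
  + \int_{\Omega} f_r(\mathbf{x}, \omega_i, \omega_o)\,
      L_i(\mathbf{x}, \omega_i)\,(\omega_i \cdot \mathbf{n})\,\mathrm{d}\omega_i
```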
Rendering is one of the major sub-topics of 3D computer graphics, and in practice is always connected to the others. In the graphics pipeline, it is the last major step, giving the final appearance to the models and animation. With the increasing sophistication of computer graphics since the 1970s, it has become a more distinct subject.
Rendering has uses in architecture, video games, simulators, movie or TV visual effects, and design visualization, each employing a different balance of features and techniques. As a product, a wide variety of renderers are available. Some are integrated into larger modeling and animation packages, some are stand-alone, some are free open-source projects. On the inside, a renderer is a carefully engineered program, based on a selective mixture of disciplines related to: light physics, visual perception, mathematics and software development.
In the case of 3D graphics, rendering may be done slowly, as in pre-rendering, or in real time. Pre-rendering is a computationally intensive process that is typically used for movie creation, while real-time rendering is often done for 3D video games which rely on the use of graphics cards with 3D hardware accelerators.
When the pre-image (a wireframe sketch usually) is complete, rendering is used, which adds in bitmap textures or procedural textures, lights, bump mapping and relative position to other objects. The result is a completed image the consumer or intended viewer sees.
For movie animations, several images (frames) must be rendered, and stitched together in a program capable of making an animation of this sort. Most 3D image editing programs can do this.
A rendered image can be understood in terms of a number of visible features. Rendering research and development has been largely motivated by finding ways to simulate these efficiently. Some relate directly to particular algorithms and techniques, while others are produced together.
- shading — how the color and brightness of a surface varies with lighting
- texture-mapping — a method of applying detail to surfaces
- bump-mapping — a method of simulating small-scale bumpiness on surfaces
- fogging/participating medium — how light dims when passing through non-clear atmosphere or air
- shadows — the effect of obstructing light
- soft shadows — varying darkness caused by partially obscured light sources
- reflection — mirror-like or highly glossy reflection
- transparency (optics), transparency (graphic) or opacity — sharp transmission of light through solid objects
- translucency — highly scattered transmission of light through solid objects
- refraction — bending of light associated with transparency
- diffraction — bending, spreading and interference of light passing by an object or aperture that disrupts the ray
- indirect illumination — surfaces illuminated by light reflected off other surfaces, rather than directly from a light source (also known as global illumination)
- caustics (a form of indirect illumination) — reflection of light off a shiny object, or focusing of light through a transparent object, to produce bright highlights on another object
- depth of field — objects appear blurry or out of focus when too far in front of or behind the object in focus
- motion blur — objects appear blurry due to high-speed motion, or the motion of the camera
- non-photorealistic rendering — rendering of scenes in an artistic style, intended to look like a painting or drawing
Many rendering algorithms have been researched, and software used for rendering may employ a number of different techniques to obtain a final image.
Tracing every particle of light in a scene is nearly always completely impractical and would take a stupendous amount of time. Even tracing a portion large enough to produce an image takes an inordinate amount of time if the sampling is not intelligently restricted.
Therefore, four loose families of more-efficient light transport modelling techniques have emerged. Rasterization, including scanline rendering, geometrically projects objects in the scene to an image plane, without advanced optical effects. Ray casting considers the scene as observed from a specific point of view, calculating the observed image based only on geometry and very basic optical laws of reflection intensity, and perhaps using Monte Carlo techniques to reduce artifacts. Ray tracing is similar to ray casting, but employs more advanced optical simulation, and usually uses Monte Carlo techniques to obtain more realistic results at a speed that is often orders of magnitude slower. The fourth type of light transport technique, radiosity, is not usually implemented as a rendering technique, but instead calculates the passage of light as it leaves the light source and illuminates surfaces. These surfaces are usually rendered to the display using one of the other three techniques.
Most advanced software combines two or more of the techniques to obtain good-enough results at reasonable cost.
Another distinction is between image order algorithms, which iterate over pixels of the image plane, and object order algorithms, which iterate over objects in the scene. Generally object order is more efficient, as there are usually fewer objects in a scene than pixels.
Scanline rendering and rasterisation
A high-level representation of an image necessarily contains elements in a different domain from pixels. These elements are referred to as primitives. In a schematic drawing, for instance, line segments and curves might be primitives. In a graphical user interface, windows and buttons might be the primitives. In rendering of 3D models, triangles and polygons in space might be primitives.
If a pixel-by-pixel (image order) approach to rendering is impractical or too slow for some task, then a primitive-by-primitive (object order) approach to rendering may prove useful. Here, one loops through each of the primitives, determines which pixels in the image it affects, and modifies those pixels accordingly. This is called rasterization, and is the rendering method used by all current graphics cards.
Rasterization is frequently faster than pixel-by-pixel rendering. First, large areas of the image may be empty of primitives; rasterization will ignore these areas, but pixel-by-pixel rendering must pass through them. Second, rasterization can improve cache coherency and reduce redundant work by taking advantage of the fact that the pixels occupied by a single primitive tend to be contiguous in the image. For these reasons, rasterization is usually the approach of choice when interactive rendering is required; however, the pixel-by-pixel approach can often produce higher-quality images and is more versatile because it does not depend on as many assumptions about the image as rasterization.
The older form of rasterization is characterized by rendering an entire face (primitive) as a single color. Alternatively, rasterization can be done in a more complicated manner by first rendering the vertices of a face and then rendering the pixels of that face as a blending of the vertex colors. This version of rasterization has overtaken the old method as it allows the graphics to flow without complicated textures (a rasterized image when used face by face tends to have a very block-like effect if not covered in complex textures; the faces are not smooth because there is no gradual color change from one primitive to the next). This newer method of rasterization utilizes the graphics card's more taxing shading functions and still achieves better performance because the simpler textures stored in memory use less space. Sometimes designers will use one rasterization method on some faces and the other method on others based on the angle at which that face meets other joined faces, thus increasing speed and not hurting the overall effect.
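The per-vertex style described above can be sketched with barycentric coordinates: each pixel inside the triangle gets a weighted blend of the three vertex colors. The following is a minimal, illustrative Python sketch (names and conventions are our own); it handles a single 2D triangle and ignores clipping, depth testing, and perspective.

```python
def edge(ax, ay, bx, by, px, py):
    # Signed area term; its sign tells which side of edge (a, b) the point p lies on.
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

def rasterize_triangle(v0, v1, v2, c0, c1, c2, width, height):
    # v* are (x, y) vertices, c* are (r, g, b) colors; returns {(x, y): color}.
    pixels = {}
    area = edge(*v0, *v1, *v2)
    if area == 0:
        return pixels                                 # degenerate triangle
    min_x = max(int(min(v[0] for v in (v0, v1, v2))), 0)
    max_x = min(int(max(v[0] for v in (v0, v1, v2))), width - 1)
    min_y = max(int(min(v[1] for v in (v0, v1, v2))), 0)
    max_y = min(int(max(v[1] for v in (v0, v1, v2))), height - 1)
    for y in range(min_y, max_y + 1):
        for x in range(min_x, max_x + 1):
            # Barycentric weights of the pixel centre with respect to v0, v1, v2.
            w0 = edge(*v1, *v2, x, y) / area
            w1 = edge(*v2, *v0, x, y) / area
            w2 = edge(*v0, *v1, x, y) / area
            if w0 >= 0 and w1 >= 0 and w2 >= 0:       # inside the triangle (or on an edge)
                pixels[(x, y)] = tuple(
                    w0 * a + w1 * b + w2 * c for a, b, c in zip(c0, c1, c2))
    return pixels

img = rasterize_triangle((10, 10), (100, 30), (40, 90),
                         (1, 0, 0), (0, 1, 0), (0, 0, 1), 128, 128)
```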
Ray casting
In ray casting the geometry which has been modeled is parsed pixel by pixel, line by line, from the point of view outward, as if casting rays out from the point of view. Where an object is intersected, the color value at the point may be evaluated using several methods. In the simplest, the color value of the object at the point of intersection becomes the value of that pixel. The color may be determined from a texture-map. A more sophisticated method is to modify the colour value by an illumination factor, but without calculating the relationship to a simulated light source. To reduce artifacts, a number of rays in slightly different directions may be averaged.
Rough simulations of optical properties may be additionally employed: a simple calculation of the ray from the object to the point of view is made. Another calculation is made of the angle of incidence of light rays from the light source(s), and from these as well as the specified intensities of the light sources, the value of the pixel is calculated. Another simulation uses illumination plotted from a radiosity algorithm, or a combination of these two.
Raycasting is primarily used for realtime simulations, such as those used in 3D computer games and cartoon animations, where detail is not important, or where it is more efficient to manually fake the details in order to obtain better performance in the computational stage. This is usually the case when a large number of frames need to be animated. The resulting surfaces have a characteristic 'flat' appearance when no additional tricks are used, as if objects in the scene were all painted with matte finish.
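A minimal, illustrative Python sketch of the idea follows (scene layout, camera model, and shading are our own assumptions): one ray is cast per pixel toward a single sphere, and the pixel is shaded with a simple Lambert-style factor at the first hit, with no bounces.

```python
import math

def ray_sphere(origin, direction, center, radius):
    # Smallest positive t where origin + t*direction hits the sphere, or None.
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    b = 2 * (direction[0] * ox + direction[1] * oy + direction[2] * oz)
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4 * c                     # a == 1 because direction is normalized
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2
    return t if t > 0 else None

def cast(width, height, center=(0.0, 0.0, 3.0), radius=1.0, light=(1.0, 1.0, -1.0)):
    # Normalize the (assumed directional) light once.
    norm = math.sqrt(sum(c * c for c in light))
    l = tuple(c / norm for c in light)
    image = [[0.0] * width for _ in range(height)]
    for y in range(height):
        for x in range(width):
            # Camera at the origin looking down +z; map the pixel onto a view plane at z = 1.
            dx = (x + 0.5) / width * 2 - 1
            dy = 1 - (y + 0.5) / height * 2
            length = math.sqrt(dx * dx + dy * dy + 1)
            d = (dx / length, dy / length, 1 / length)
            t = ray_sphere((0.0, 0.0, 0.0), d, center, radius)
            if t is not None:
                hit = tuple(t * d[i] for i in range(3))
                n = tuple((hit[i] - center[i]) / radius for i in range(3))
                # Simple Lambert-style shading against the light direction.
                image[y][x] = max(0.0, sum(n[i] * l[i] for i in range(3)))
    return image

grayscale = cast(64, 64)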
Ray tracing
Ray tracing aims to simulate the natural flow of light, interpreted as particles. Often, ray tracing methods are used to approximate the solution to the rendering equation by applying Monte Carlo methods to it. Some of the most widely used methods are path tracing, bidirectional path tracing, and Metropolis light transport, but semi-realistic methods are also in use, such as Whitted-style ray tracing or hybrids. While most implementations let light propagate along straight lines, applications exist that simulate relativistic spacetime effects.
In a final, production quality rendering of a ray traced work, multiple rays are generally shot for each pixel, and traced not just to the first object of intersection, but rather, through a number of sequential 'bounces', using the known laws of optics such as "angle of incidence equals angle of reflection" and more advanced laws that deal with refraction and surface roughness.
Once the ray either encounters a light source, or more probably once a set limiting number of bounces has been evaluated, then the surface illumination at that final point is evaluated using techniques described above, and the changes along the way through the various bounces evaluated to estimate a value observed at the point of view. This is all repeated for each sample, for each pixel.
In distribution ray tracing, at each point of intersection, multiple rays may be spawned. In path tracing, however, only a single ray or none is fired at each intersection, utilizing the statistical nature of Monte Carlo experiments.
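The following sketch, in Python, shows a path-tracing-style loop of this kind for a deliberately tiny scene: one diffuse sphere, a uniform 'sky' acting as the only light source, and a single randomly sampled continuation ray per intersection. The scene values, sample count and bounce limit are arbitrary example choices, and the output is reduced to one monochrome pixel to keep the code short.

    # Minimal path-tracing sketch: one diffuse sphere lit by a uniform sky.
    # All constants are illustrative; radiance is a single grey value.
    import math, random

    MAX_BOUNCES = 4
    SPHERE_CENTER, SPHERE_RADIUS, ALBEDO = (0.0, -0.2, 3.0), 1.0, 0.7
    SKY = 1.0          # light arriving along any ray that escapes the scene

    def hit_sphere(o, d):
        oc = [o[i] - SPHERE_CENTER[i] for i in range(3)]
        b = 2 * sum(oc[i] * d[i] for i in range(3))
        c = sum(v * v for v in oc) - SPHERE_RADIUS ** 2
        disc = b * b - 4 * c
        if disc < 0: return None
        t = (-b - math.sqrt(disc)) / 2
        return t if t > 1e-4 else None

    def random_direction(normal):
        # Uniformly sample a direction in the hemisphere around the normal.
        while True:
            v = [random.uniform(-1, 1) for _ in range(3)]
            if sum(x * x for x in v) <= 1: break
        n = math.sqrt(sum(x * x for x in v))
        v = [x / n for x in v]
        return v if sum(v[i] * normal[i] for i in range(3)) > 0 else [-x for x in v]

    def trace(o, d, depth=0):
        if depth > MAX_BOUNCES: return 0.0      # stop after a set number of bounces
        t = hit_sphere(o, d)
        if t is None: return SKY                # ray escaped: the sky acts as the light source
        hit = [o[i] + t * d[i] for i in range(3)]
        normal = [(hit[i] - SPHERE_CENTER[i]) / SPHERE_RADIUS for i in range(3)]
        bounce = random_direction(normal)       # path tracing: one random continuation ray
        cosine = sum(bounce[i] * normal[i] for i in range(3))
        # The factor 2 compensates for uniform hemisphere sampling of a diffuse surface.
        return 2.0 * ALBEDO * cosine * trace(hit, bounce, depth + 1)

    # One pixel, many samples: shoot several rays and average them.
    samples = [trace((0, 0, 0), (0, 0, 1)) for _ in range(256)]
    print(sum(samples) / len(samples))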
As a brute-force method, ray tracing has been too slow to consider for real-time, and until recently too slow even to consider for short films of any degree of quality, although it has been used for special effects sequences, and in advertising, where a short portion of high quality (perhaps even photorealistic) footage is required.
However, efforts at optimizing to reduce the number of calculations needed in portions of a work where detail is not high or does not depend on ray tracing features have led to a realistic possibility of wider use of ray tracing. There is now some hardware accelerated ray tracing equipment, at least in prototype phase, and some game demos which show use of real-time software or hardware ray tracing.
Radiosity
Radiosity is a method which attempts to simulate the way in which directly illuminated surfaces act as indirect light sources that illuminate other surfaces. This produces more realistic shading and seems to better capture the 'ambience' of an indoor scene. A classic example is the way that shadows 'hug' the corners of rooms.
The optical basis of the simulation is that some diffused light from a given point on a given surface is reflected in a large spectrum of directions and illuminates the area around it.
The simulation technique may vary in complexity. Many renderings have a very rough estimate of radiosity, simply illuminating an entire scene very slightly with a factor known as ambiance. However, when advanced radiosity estimation is coupled with a high quality ray tracing algorithm, images may exhibit convincing realism, particularly for indoor scenes.
In advanced radiosity simulation, recursive, finite-element algorithms 'bounce' light back and forth between surfaces in the model, until some recursion limit is reached. The colouring of one surface in this way influences the colouring of a neighbouring surface, and vice versa. The resulting values of illumination throughout the model (sometimes including for empty spaces) are stored and used as additional inputs when performing calculations in a ray-casting or ray-tracing model.
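A minimal sketch of this iteration for three abstract patches, with invented reflectivities, emission and form factors, looks like the following; a real solver would derive the form factors from the scene geometry rather than hard-coding them.

    # Minimal sketch of iterative radiosity: light is 'bounced' between patches
    # until the values settle. All numbers below are made-up example data.

    emission     = [1.0, 0.0, 0.0]          # patch 0 is a light source
    reflectivity = [0.0, 0.7, 0.5]
    # form_factor[i][j]: fraction of light leaving patch j that arrives at patch i
    form_factor = [[0.0, 0.2, 0.2],
                   [0.3, 0.0, 0.4],
                   [0.3, 0.4, 0.0]]

    radiosity = emission[:]                  # start with the directly emitted light
    for _ in range(50):                      # iterate until the bounced light settles
        radiosity = [
            emission[i] + reflectivity[i] *
            sum(form_factor[i][j] * radiosity[j] for j in range(3))
            for i in range(3)
        ]
    print(radiosity)                         # per-patch illumination, reusable from any viewpoint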
Due to the iterative/recursive nature of the technique, complex objects are particularly slow to emulate. Prior to the standardization of rapid radiosity calculation, some graphic artists used a technique referred to loosely as false radiosity by darkening areas of texture maps corresponding to corners, joints and recesses, and applying them via self-illumination or diffuse mapping for scanline rendering. Even now, advanced radiosity calculations may be reserved for calculating the ambiance of the room, from the light reflecting off walls, floor and ceiling, without examining the contribution that complex objects make to the radiosity—or complex objects may be replaced in the radiosity calculation with simpler objects of similar size and texture.
Radiosity calculations are viewpoint independent which increases the computations involved, but makes them useful for all viewpoints. If there is little rearrangement of radiosity objects in the scene, the same radiosity data may be reused for a number of frames, making radiosity an effective way to improve on the flatness of ray casting, without seriously impacting the overall rendering time-per-frame.
Because of this, radiosity is a prime component of leading real-time rendering methods, and has been used from beginning-to-end to create a large number of well-known recent feature-length animated 3D-cartoon films.
Sampling and filtering
One problem that any rendering system must deal with, no matter which approach it takes, is the sampling problem. Essentially, the rendering process tries to depict a continuous function from image space to colors by using a finite number of pixels. As a consequence of the Nyquist–Shannon sampling theorem, any spatial detail that is to be reproduced must span at least two pixels, so the finest displayable detail is set by the image resolution. In simpler terms, this expresses the idea that an image cannot display details, peaks or troughs in color or intensity, that are smaller than one pixel.
If a naive rendering algorithm is used without any filtering, high frequencies in the image function will cause ugly aliasing to be present in the final image. Aliasing typically manifests itself as jaggies, or jagged edges on objects where the pixel grid is visible. In order to remove aliasing, all rendering algorithms (if they are to produce good-looking images) must use some kind of low-pass filter on the image function to remove high frequencies, a process called antialiasing.
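One simple form of antialiasing is supersampling: evaluate the image function at several jittered positions inside each pixel and average them, which acts as a crude box (low-pass) filter. The sketch below uses an invented hard-edged image function to show the effect.

    # Minimal sketch of antialiasing by supersampling. The sharp black/white
    # diagonal edge used as the image function is illustrative only.
    import random

    def image_function(x, y):
        # A hard edge: exactly the kind of high frequency that causes jaggies.
        return 1.0 if x > y else 0.0

    def render_pixel(px, py, samples_per_pixel=16):
        total = 0.0
        for _ in range(samples_per_pixel):
            # Jittered sample position inside the pixel footprint.
            total += image_function(px + random.random(), py + random.random())
        return total / samples_per_pixel     # averaged value: grey along the edge

    row = [round(render_pixel(x, 3), 2) for x in range(8)]
    print(row)   # pixels near the edge take intermediate values instead of jumping 0 -> 1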
Optimizations used by an artist when a scene is being developed
Due to the large number of calculations, a work in progress is usually only rendered in detail appropriate to the portion of the work being developed at a given time, so in the initial stages of modeling, wireframe and ray casting may be used, even where the target output is ray tracing with radiosity. It is also common to render only parts of the scene at high detail, and to remove objects that are not important to what is currently being developed.
Common optimizations for real time rendering
For real-time rendering, it is common to simplify one or more of the usual approximations and to tune the renderer to the exact parameters of the scenery in question; the scenery itself is also tuned to agreed limits so as to get the most 'bang for the buck'.
Academic core
The implementation of a realistic renderer always has some basic element of physical simulation or emulation — some computation which resembles or abstracts a real physical process.
The term "physically based" indicates the use of physical models and approximations that are more general and widely accepted outside rendering. A particular set of related techniques have gradually become established in the rendering community.
The basic concepts are moderately straightforward, but intractable to calculate; and a single elegant algorithm or approach has been elusive for more general purpose renderers. In order to meet demands of robustness, accuracy and practicality, an implementation will be a complex combination of different techniques.
Rendering research is concerned with both the adaptation of scientific models and their efficient application.
The rendering equation
This is the key academic/theoretical concept in rendering. It serves as the most abstract formal expression of the non-perceptual aspect of rendering. All more complete algorithms can be seen as solutions to particular formulations of this equation.
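In its usual formulation (the notation below is the standard one rather than a quotation from a particular source), the equation reads:

    $L_o(x, \omega_o) = L_e(x, \omega_o) + \int_{\Omega} f_r(x, \omega_i, \omega_o)\, L_i(x, \omega_i)\, (\omega_i \cdot n)\, d\omega_i$

where x is a point on a surface, n is the surface normal at x, and Ω is the hemisphere of incoming directions above x.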
Meaning: at a particular position and direction, the outgoing light (Lo) is the sum of the emitted light (Le) and the reflected light. The reflected light being the sum of the incoming light (Li) from all directions, multiplied by the surface reflection and incoming angle. By connecting outward light to inward light, via an interaction point, this equation stands for the whole 'light transport' — all the movement of light — in a scene.
The bidirectional reflectance distribution function
The bidirectional reflectance distribution function (BRDF) expresses a simple model of light interaction with a surface as follows:
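In the usual notation, the BRDF is the ratio of the differential reflected radiance leaving in direction ωo to the differential irradiance arriving from direction ωi:

    $f_r(\omega_i, \omega_o) = \dfrac{dL_r(\omega_o)}{L_i(\omega_i)\, \cos\theta_i\, d\omega_i}$

where θi is the angle between the incoming direction and the surface normal.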
Light interaction is often approximated by the even simpler models of diffuse reflection and specular reflection, although both of these can also be expressed as BRDFs.
Geometric optics
Rendering is practically exclusively concerned with the particle aspect of light physics — known as geometric optics. Treating light, at its basic level, as particles bouncing around is a simplification, but appropriate: the wave aspects of light are negligible in most scenes, and are significantly more difficult to simulate. Notable wave aspect phenomena include diffraction (as seen in the colours of CDs and DVDs) and polarisation (as seen in LCDs). Both types of effect, if needed, are approximated by appearance-oriented adjustments to the reflection model.
Visual perception
Though it receives less attention, an understanding of human visual perception is valuable to rendering. This is mainly because image displays and human perception have restricted ranges. A renderer can simulate an almost infinite range of light brightness and color, but current displays — movie screen, computer monitor, etc. — cannot handle so much, and something must be discarded or compressed. Human perception also has limits, and so does not need to be given large-range images to create realism. This can help solve the problem of fitting images into displays, and, furthermore, suggest what short-cuts could be used in the rendering simulation, since certain subtleties won't be noticeable. This related subject is tone mapping.
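A very small example of such a compression step is a global tone-mapping operator of the form L/(1+L) (a Reinhard-style curve); the sample radiance values in the sketch below are invented.

    # Minimal sketch of compressing a high range of simulated brightness into
    # the limited [0, 1) range of a display using the simple operator L/(1+L).

    def tone_map(luminance):
        return luminance / (1.0 + luminance)   # maps [0, infinity) into [0, 1)

    for radiance in [0.05, 0.5, 5.0, 50.0, 500.0]:
        print(radiance, "->", round(tone_map(radiance), 3))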
Rendering for movies often takes place on a network of tightly connected computers known as a render farm.
The current state of the art in 3-D image description for movie creation is the mental ray scene description language designed at mental images and the RenderMan Shading Language designed at Pixar (compare with simpler 3D file formats such as VRML, or APIs such as OpenGL and DirectX, which are tailored for 3D hardware accelerators).
Other renderers (including proprietary ones) can be and sometimes are used, but most other renderers tend to lack one or more of the often-needed features such as good texture filtering, texture caching, programmable shaders, high-end geometry types like hair, subdivision or NURBS surfaces with tessellation on demand, geometry caching, ray tracing with geometry caching, high-quality shadow mapping, speed, or patent-free implementations. Other highly sought features these days may include interactive photorealistic rendering (IPR) and hardware rendering/shading.
Chronology of important published ideas
- 1968 Ray casting
- 1970 Scanline rendering
- 1971 Gouraud shading
- 1974 Texture mapping
- 1974 Z-buffering
- 1975 Phong shading
- 1976 Environment mapping
- 1977 Shadow volumes
- 1978 Shadow buffer
- 1978 Bump mapping
- 1980 BSP trees
- 1980 Ray tracing
- 1981 Cook shader
- 1983 MIP maps
- 1984 Octree ray tracing
- 1984 Alpha compositing
- 1984 Distributed ray tracing
- 1984 Radiosity
- 1985 Hemicube radiosity
- 1986 Light source tracing
- 1986 Rendering equation
- 1987 Reyes rendering
- 1991 Hierarchical radiosity
- 1993 Tone mapping
- 1993 Subsurface scattering
- 1995 Photon mapping
- 1997 Metropolis light transport
- 1997 Instant Radiosity
- 2002 Precomputed Radiance Transfer
See also
- 2D computer graphics
- 3D rendering
- Architectural rendering
- Global illumination
- Graphics pipeline
- High dynamic range rendering
- Image-based modeling and rendering
- Non-photorealistic rendering
- Painter's algorithm
- Raster image processor
- Ray tracing
- Software rendering
- Scanline rendering/Scanline algorithm
- Unbiased rendering
- Vector graphics
- Virtual model
- Virtual studio
- Volume rendering
- Z-buffer algorithms
Books and summaries
- Pharr, Matt; Humphreys, Greg (2004). Physically based rendering from theory to implementation. Amsterdam: Elsevier/Morgan Kaufmann. ISBN 0-12-553180-X.
- Shirley, Peter; Morley, R. Keith (2003). Realistic ray tracing (2 ed.). Natick, Mass.: AK Peters. ISBN 1-56881-198-5.
- Dutré, Philip; Bekaert, Philippe; Bala, Kavita (2003). Advanced global illumination ([Online-Ausg.] ed.). Natick, Mass.: A K Peters. ISBN 1-56881-177-2.
- Akenine-Möller, Tomas; Haines, Eric (2004). Real-time rendering (2 ed.). Natick, Mass.: AK Peters. ISBN 1-56881-182-9.
- Strothotte, Thomas; Schlechtweg, Stefan (2002). Non-photorealistic computer graphics modeling, rendering, and animation (2 ed.). San Francisco, CA: Morgan Kaufmann. ISBN 1-55860-787-0.
- Gooch, Bruce; Gooch, Amy (2001). Non-photorealistic rendering. Natick, Mass.: A K Peters. ISBN 1-56881-133-0.
- Jensen, Henrik Wann (2001). Realistic image synthesis using photon mapping ([Nachdr.] ed.). Natick, Mass.: AK Peters. ISBN 1-56881-147-0.
- Blinn, Jim (1996). Jim Blinn's corner : a trip down the graphics pipeline. San Francisco, Calif.: Morgan Kaufmann Publishers. ISBN 1-55860-387-5.
- Glassner, Andrew S. (2004). Principles of digital image synthesis (2 ed.). San Francisco, Calif.: Kaufmann. ISBN 1-55860-276-3.
- Cohen, Michael F.; Wallace, John R. (1998). Radiosity and realistic image synthesis (3 ed.). Boston, Mass. [u.a.]: Academic Press Professional. ISBN 0-12-178270-0.
- Foley, James D.; Van Dam; Feiner; Hughes (1990). Computer graphics : principles and practice (2 ed.). Reading, Mass.: Addison-Wesley. ISBN 0-201-12110-7.
- Andrew S. Glassner, ed. (1989). An introduction to ray tracing (3 ed.). London [u.a.]: Acad. Press. ISBN 0-12-286160-4.
- Description of the 'Radiance' system
- Relativistic Ray-Tracing: Simulating the Visual Appearance of Rapidly Moving Objects. CiteSeerX: 10.1.1.56.830.
- A brief introduction to RenderMan
- Appel, A. (1968). "Some techniques for shading machine renderings of solids". Proceedings of the Spring Joint Computer Conference 32. pp. 37–49.
- Bouknight, W. J. (1970). "A procedure for generation of three-dimensional half-tone computer graphics presentations". Communications of the ACM 13 (9): 527–536. doi:10.1145/362736.362739.
- Gouraud, H. (1971). "Continuous shading of curved surfaces". IEEE Transactions on Computers 20 (6): 623–629.
- Catmull, E. (1974). A subdivision algorithm for computer display of curved surfaces (PhD thesis). University of Utah. http://www.pixartouchbook.com/storage/catmull_thesis.pdf.
- Phong, B-T (1975). "Illumination for computer generated pictures". Communications of the ACM 18 (6): 311–316.
- Blinn, J.F.; Newell, M.E. (1976). "Texture and reflection in computer generated images". Communications of the ACM 19: 542–546. CiteSeerX: 10.1.1.87.8903.
- Crow, F.C. (1977). "Shadow algorithms for computer graphics". Computer Graphics (Proceedings of SIGGRAPH 1977) 11 (2). pp. 242–248.
- Williams, L. (1978). "Casting curved shadows on curved surfaces". Computer Graphics (Proceedings of SIGGRAPH 1978) 12 (3). pp. 270–274. CiteSeerX: 10.1.1.134.8225.
- Blinn, J.F. (1978). "Simulation of wrinkled surfaces". Computer Graphics (Proceedings of SIGGRAPH 1978) 12 (3). pp. 286–292.
- Fuchs, H.; Kedem, Z.M.; Naylor, B.F. (1980). "On visible surface generation by a priori tree structures". Computer Graphics (Proceedings of SIGGRAPH 1980) 14 (3). pp. 124–133. CiteSeerX: 10.1.1.112.4406.
- Whitted, T. (1980). "An improved illumination model for shaded display". Communications of the ACM 23 (6): 343–349. CiteSeerX: 10.1.1.114.7629.
- Cook, R.L.; Torrance, K.E. (1981). "A reflectance model for computer graphics". Computer Graphics (Proceedings of SIGGRAPH 1981) 15 (3). pp. 307–316. CiteSeerX: 10.1.1.88.7796.
- Williams, L. (1983). "Pyramidal parametrics". Computer Graphics (Proceedings of SIGGRAPH 1983) 17 (3). pp. 1–11. CiteSeerX: 10.1.1.163.6298.
- Glassner, A.S. (1984). "Space subdivision for fast ray tracing". IEEE Computer Graphics & Applications 4 (10): 15–22.
- Porter, T.; Duff, T. (1984). "Compositing digital images". Computer Graphics (Proceedings of SIGGRAPH 1984) 18 (3). pp. 253–259.
- Cook, R.L.; Porter, T.; Carpenter, L. (1984). "Distributed ray tracing". Computer Graphics (Proceedings of SIGGRAPH 1984) 18 (3). pp. 137–145.
- Goral, C.; Torrance, K.E.; Greenberg, D.P.; Battaile, B. (1984). "Modeling the interaction of light between diffuse surfaces". Computer Graphics (Proceedings of SIGGRAPH 1984) 18 (3). pp. 213–222. CiteSeerX: 10.1.1.112.356.
- Cohen, M.F.; Greenberg, D.P. (1985). "The hemi-cube: a radiosity solution for complex environments". Computer Graphics (Proceedings of SIGGRAPH 1985) 19 (3). pp. 31–40. doi:10.1145/325165.325171.
- Arvo, J. (1986). "Backward ray tracing". SIGGRAPH 1986 Developments in Ray Tracing course notes. CiteSeerX: 10.1.1.31.581.
- Kajiya, J. (1986). "The rendering equation". Computer Graphics (Proceedings of SIGGRAPH 1986) 20 (4). pp. 143–150. CiteSeerX: 10.1.1.63.1402.
- Cook, R.L.; Carpenter, L.; Catmull, E. (1987). "The Reyes image rendering architecture". Computer Graphics (Proceedings of SIGGRAPH 1987) 21 (4). pp. 95–102.
- Hanrahan, P.; Salzman, D.; Aupperle, L. (1991). "A rapid hierarchical radiosity algorithm". Computer Graphics (Proceedings of SIGGRAPH 1991) 25 (4). pp. 197–206. CiteSeerX: 10.1.1.93.5694.
- Tumblin, J.; Rushmeier, H.E. (1993). "Tone reproduction for realistic computer generated images". IEEE Computer Graphics & Applications 13 (6): 42–48.
- Hanrahan, P.; Krueger, W. (1993). "Reflection from layered surfaces due to subsurface scattering". Computer Graphics (Proceedings of SIGGRAPH 1993) 27. pp. 165–174. CiteSeerX: 10.1.1.57.9761.
- Jensen, H.W.; Christensen, N.J. (1995). "Photon maps in bidirectional monte carlo ray tracing of complex objects". Computers & Graphics 19 (2): 215–224. CiteSeerX: 10.1.1.97.2724.
- Veach, E.; Guibas, L. (1997). "Metropolis light transport". Computer Graphics (Proceedings of SIGGRAPH 1997) 16. pp. 65–76. CiteSeerX: 10.1.1.88.944.
- Keller, A. (1997). "Instant Radiosity". Computer Graphics (Proceedings of SIGGRAPH 1997) 24. pp. 49–56. CiteSeerX: 10.1.1.15.240.
- Sloan, P.; Kautz, J.; Snyder, J. (2002). "Precomputed Radiance Transfer for Real-Time Rendering in Dynamic, Low Frequency Lighting Environments". Computer Graphics (Proceedings of SIGGRAPH 2002) 29. pp. 527–536.
|Look up renderer in Wiktionary, the free dictionary.|
- SIGGRAPH: the ACM's special interest group in graphics, the largest academic and professional association and conference.
- http://www.cs.brown.edu/~tor/ List of links to (recent) siggraph papers (and some others) on the web. | http://en.wikipedia.org/wiki/Rendering_(computer_graphics) | 13 |
21 | In the introduction, we gave an informal definition of an algorithm as "a set of instructions for solving a problem" and we illustrated this definition with a recipe, directions to a friend's house, and instructions for changing the oil in a car engine. You also created your own algorithm for putting letters and numbers in order. While these simple algorithms are fine for us, they are much too ambiguous for a computer. In order for an algorithm to be applicable to a computer, it must have certain characteristics. We will specify these characteristics in our formal definition of an algorithm.
|An algorithm is a well-ordered collection of unambiguous and effectively computable operations that when executed produces a result and halts in a finite amount of time [Schneider and Gersting 1995].|
With this definition, we can identify five important characteristics of algorithms.
- Algorithms are well-ordered.
- Algorithms have unambiguous operations.
- Algorithms have effectively computable operations.
- Algorithms produce a result.
- Algorithms halt in a finite amount of time.
These characteristics need a little more explanation, so we will look at each one in detail.
Algorithms are well-ordered
Since an algorithm is a collection of operations or instructions, we must know the correct order in which to execute the instructions. If the order is unclear, we may perform the wrong instruction or we may be uncertain which instruction should be performed next. This characteristic is especially important for computers. A computer can only execute an algorithm if it knows the exact order of steps to perform.
Algorithms have unambiguous operations
Each operation in an algorithm must be sufficiently clear so that it does not need to be simplified. Given a list of numbers, you can easily order them from largest to smallest with the simple instruction "Sort these numbers." A computer, however, needs more detail to sort numbers. It must be told to search for the smallest number, how to find the smallest number, how to compare numbers together, etc. The operation "Sort these numbers" is ambiguous to a computer because the computer has no basic operations for sorting. Basic operations used for writing algorithms are known as primitive operations or primitives. When an algorithm is written in computer primitives, then the algorithm is unambiguous and the computer can execute it.
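As an illustration, the ambiguous instruction "Sort these numbers" can be decomposed into primitives such as comparing two values and swapping two items. The sketch below does this in Python with ordinary selection sort; the example list is made up.

    # A sketch of how "Sort these numbers" can be broken down into unambiguous
    # primitive operations: compare two values, remember a position, swap items.

    def sort_numbers(values):
        for i in range(len(values)):
            smallest = i
            # Search the unsorted part for the smallest number ...
            for j in range(i + 1, len(values)):
                # ... by comparing numbers together, one pair at a time.
                if values[j] < values[smallest]:
                    smallest = j
            # Move the smallest number into its final place.
            values[i], values[smallest] = values[smallest], values[i]
        return values

    print(sort_numbers([42, 7, 19, 3, 25]))   # [3, 7, 19, 25, 42]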
Algorithms have effectively computable operations
Each operation in an algorithm must be doable, that is, the operation must be something that is possible to do. Suppose you were given an algorithm for planting a garden where the first step instructed you to remove all large stones from the soil. This instruction may not be doable if there is a four ton rock buried just below ground level. For computers, many mathematical operations such as division by zero or finding the square root of a negative number are also impossible. These operations are not effectively computable so they cannot be used in writing algorithms.
Algorithms produce a result
In our simple definition of an algorithm, we stated that an algorithm is a set of instructions for solving a problem. Unless an algorithm produces some result, we can never be certain whether our solution is correct. Have you ever given a command to a computer and discovered that nothing changed? What was your response? You probably thought that the computer was malfunctioning because your command did not produce any type of result. Without some visible change, you have no way of determining the effect of your command. The same is true with algorithms. Only algorithms which produce results can be verified as either right or wrong.
Algorithms halt in a finite amount of time
Algorithms should be composed of a finite number of operations and they should complete their execution in a finite amount of time. Suppose we wanted to write an algorithm to print all the integers greater than 1. Our steps might look something like this:
- Print the number 2.
- Print the number 3.
- Print the number 4.
While our algorithm seems to be pretty clear, we have two problems. First, the algorithm must have an infinite number of steps because there are an infinite number of integers greater than one. Second, the algorithm will run forever trying to count to infinity. These problems violate our definition that an algorithm must halt in a finite amount of time. Every algorithm must reach some operation that tells it to stop.
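The contrast can be made concrete in code. In the Python sketch below, the first function mirrors the flawed algorithm and never halts, while the second adds a stopping condition (an arbitrary example limit of 100) so that it halts in a finite amount of time.

    # The flawed algorithm versus a corrected one with a stopping condition.

    def print_integers_forever():
        n = 2
        while True:          # no operation ever tells the loop to stop
            print(n)
            n += 1

    def print_integers_up_to(limit=100):
        n = 2
        while n <= limit:    # the algorithm halts once the limit is reached
            print(n)
            n += 1

    # print_integers_up_to(10) prints 2 through 10 and then stops.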
- Schneider, M. and J. Gersting (1995), An Invitation to Computer Science, West Publishing Company, New York, NY, p. 9. | http://courses.cs.vt.edu/csonline/Algorithms/Lessons/DefinitionOfAlgorithm/Lesson.html | 13 |
81 | Objectives
When you complete this lesson, you will be able to:
- Describe a standard-form categorical syllogism
- Recognize the terms of the syllogism
- Identify the mood and figure of a syllogism
- Use the Venn diagram technique for testing syllogisms
- List and describe the syllogistic rules and syllogistic fallacies
- List the fifteen valid forms of the categorical syllogism
Standard-Form Categorical Syllogisms 3 Syllogism Any deductive argument in which a conclusion is inferred from two premises Categorical syllogism Deductive argument consisting of three categorical propositions that together contain exactly three terms, each of which occurs in exactly two of the constituent propositions
Standard-Form Categorical Syllogisms, continued 4 Example No heroes are cowards. Some soldiers are cowards. Therefore some soldiers are not heroes. Standard-form categorical syllogism Premises and conclusion are all standard-form categorical propositions Propositions are arranged in a specific standard order
Terms of the Syllogism 5 To identify the terms by name, look at the conclusion “Some soldiers are not heroes.” Major term Term that occurs as the predicate (heroes) Minor term Term that occurs as the subject (soldiers) Middle term Never appears in the conclusion (cowards)
Terms of the Syllogism, continued 6 Major premise Contains the major term (heroes) “No heroes are cowards” Minor premise Contains the minor term (soldiers) “Some soldiers are cowards” Order of standard form The major premise is stated first The minor premise is stated second The conclusion is stated last
Mood of the Syllogism 7 Determined by the types of categorical propositions contained in the argument No heroes are cowards (E proposition) Some soldiers are cowards (I proposition) Some soldiers are not heroes (O proposition) Mood is EIO 64 possible moods
The Figure of the Syllogism 8 Determined by the position of the middle term Types First figure Middle term is the subject term of the major premise and the predicate term of the minor premise Second figure Middle term is the predicate term of both premises Third figure Middle term is the subject of both premises Fourth figure Middle term is the predicate term of the major premise and the subject of the minor premise
The Figure of the Syllogism, continued
First Figure: M – P / S – M / ∴ S – P
Second Figure: P – M / S – M / ∴ S – P
Third Figure: M – P / M – S / ∴ S – P
Fourth Figure: P – M / M – S / ∴ S – P
The Figure of the Syllogism, continued 10 Example No heroes are cowards. Some soldiers are cowards. Therefore some soldiers are not heroes. Middle term (cowards) appears as predicate in both premises (second figure) The syllogism is EIO-2
The Formal Nature of Syllogistic Argument 11 A valid syllogism is valid by virtue of its form alone AAA-1 syllogisms are always valid All M is P. All S is M. Therefore all S is P. Valid regardless of subject matter All Greeks are humans. All Athenians are Greeks. Therefore all Athenians are humans.
Exercises 12 No nuclear-powered submarines are commercial vessels, so no warships are commercial vessels, since all nuclear-powered submarines are warships. Solution Step 1. The conclusion is "No warships are commercial vessels". Step 2. "Commercial vessels" is the predicate term of this conclusion, and is therefore the major term of the syllogism. Step 3. The major premise, the premise that contains this term, is "No nuclear-powered submarines are commercial vessels". Step 4. The remaining premise, "All nuclear-powered submarines are warships", is indeed the minor premise, since it does contain the subject term of the conclusion, "warships". Step 5. In standard form this syllogism is written thus: No nuclear-powered submarines are commercial vessels. All nuclear-powered submarines are warships. Therefore no warships are commercial vessels. Step 6. The three propositions in this syllogism are, in order, E, A and E. The middle term "nuclear-powered submarines," is the subject term of both premises, so the syllogism is in the third figure. The mood and figure of the syllogism therefore are EAE-3.
Exercises - Answer 13 Some objects of worship are fir trees. All fir trees are evergreens. Therefore some evergreens are objects of worship. IAI-4.
Exercises - Answer 14 Some artificial satellites are not American inventions. All artificial satellites are important scientific achievements. Therefore some important scientific achievements are not American inventions. OAO-3.
Group Exercises - Answer 15 #4 All certified public accountants are people of good business sense. No television stars are certified public accountants. Therefore no television stars are people of good business sense. AEE-1.
Group Exercises - Answers 16 #6 No delicate mechanisms are suitable toys for children. All CD players are delicate mechanisms. Therefore no CD players are suitable toys for children. EAE-1.
Group Exercises - Answers 17 #7 Some juvenile delinquents are products of broken homes. All juvenile delinquents are maladjusted individuals. Therefore some maladjusted individuals are products of broken homes. IAI-3.
Venn Diagram Technique for Testing Syllogisms
[Slide diagram: three overlapping circles labelled S, P and M, whose eight regions are labelled with the combinations of S, P, M and their complements.] If S stands for Swede, P for peasant, and M for musician, then one region represents all Swedes who are not peasants or musicians, another represents all Swedish peasants who are not musicians, etc.
Venn Diagram Technique for Testing Syllogisms, continued
[Slide diagrams: "All M is P" is diagrammed first; adding "All S is M" leaves the conclusion "All S is P" confirmed.]
Venn Diagram Technique for Testing Syllogisms, continued
Invalid argument: All dogs are mammals. All cats are mammals. Therefore all cats are dogs.
[Slide diagram: the Dogs and Cats circles sit inside Mammals, but the premises leave room for cats that are not dogs and dogs that are not cats.]
Exercises pg. 232-233 21 #1 All business executives are active opponents of increased corporation taxes, for all active opponents of increased corporation taxes are members of the chamber of commerce, and all members of the chamber of commerce are business executives. One possible refuting analogy is this: All bipeds are astronauts, All astronauts are humans Therefore all humans are bipeds.
Group Exercises pg. 232-233 22 Do numbers 3, 4, 5 and 7
Venn Diagram Technique for Testing Syllogisms, continued
Diagram the universal premise first if the other premise is particular: All artists are egotists. Some artists are paupers. Therefore some paupers are egotists.
[Slide diagram: circles for Egotists, Paupers and Artists, with an x marking the particular premise.]
Venn Diagram Technique for Testing Syllogisms, continued
Example: All great scientists are college graduates. Some professional athletes are college graduates. Therefore some professional athletes are great scientists.
[Slide diagram: circles for Great scientists, Professional athletes and College graduates, with an x marking the particular premise.]
Venn Diagram Technique for Testing Syllogisms, continued 25 Label the circles of a three-circle Venn diagram with the syllogism’s three terms Diagram both premises, starting with the universal premise Inspect the diagram to see whether the diagram of the premises contains a diagram of the conclusion
Group Exercises 26 Do 2,3,4 and 6
Group Exercises #2 27
Group Exercises #3 28
Group Exercises #4 29
Group Exercises #6 30
Syllogistic Rules and Syllogistic Fallacies 31 Rule 1. Avoid four terms Syllogism must contain exactly three terms, each of which is used in the same sense throughout the argument Fallacy of four terms Power tends to corrupt Knowledge is power Knowledge tends to corrupt Justification: This syllogism appears to have only three terms, but there are really four since one of them, the middle term “power” is used in different senses in the two premises. To reveal the argument’s invalidity we need only note that the word “power” in the first premise means “ the possession of control or command over people,” whereas the word “power” in the second premise means “the ability to control things.
Syllogistic Rules and Syllogistic Fallacies, continued 32 Rule 2. Distribute the middle term in at least one premise If the middle term is not distributed in at least one premise, the connection required by the conclusion cannot be made Fallacy of the undistributed middle All sharks are fish All salmon are fish All salmon are sharks Justification: The middle term is what connects the major and the minor term. If the middle term is never distributed, then the major and minor terms might be related to different parts of the M class, thus giving no common ground to relate S and P.
Syllogistic Rules and Syllogistic Fallacies, continued 33 Rule 3. Any term distributed in the conclusion must be distributed in the premises When the conclusion distributes a term that was undistributed in the premises, it says more about that term than the premises did Fallacy of illicit process All tigers are mammals All mammals are animals All animals are tigers Worth Diagramming
Syllogistic Rules and Syllogistic Fallacies, continued 34 Rule 4. Avoid two negative premises Two premises asserting exclusion cannot provide the linkage that the conclusion asserts Fallacy of exclusive premises No fish are mammals Some dogs are not fish Some dogs are not mammals If the premises are both negative, then the relationship between S and P is denied. The conclusion cannot, therefore, say anything in a positive fashion. That information goes beyond what is contained in the premises.
Syllogistic Rules and Syllogistic Fallacies, continued 35 Rule 5. If either premise is negative, the conclusion must be negative Class inclusion can only be stated by affirmative propositions Fallacy of drawing an affirmative conclusion from a negative premise All crows are birds Some wolves are not crows Some wolves are birds
Syllogistic Rules and Syllogistic Fallacies, continued 36 Rule 6. From two universal premises no particular conclusion may be drawn Universal propositions have no existential import Particular propositions have existential import Cannot draw a conclusion with existential import from premises that do not have existential import Existential fallacy All mammals are animals All tigers are mammals Some tigers are animals
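Rules 2 through 6 can be applied mechanically once the mood and figure are known. The Python sketch below (not taken from the slides) encodes the usual distribution table (universals distribute their subject, negatives their predicate) and checks each rule in turn; Rule 1 concerns the wording of the terms and so cannot be checked from mood and figure alone.

    # Sketch of a rule-based validity check for standard-form syllogisms.
    from itertools import product

    def distributes(letter, role):
        return (role == "subject" and letter in "AE") or \
               (role == "predicate" and letter in "EO")

    # For each figure, the (subject, predicate) of the major and minor premises.
    FIGURES = {1: (("M", "P"), ("S", "M")),
               2: (("P", "M"), ("S", "M")),
               3: (("M", "P"), ("M", "S")),
               4: (("P", "M"), ("M", "S"))}

    def valid(mood, figure):
        major, minor, conclusion = mood          # e.g. mood = "EIO"
        premises = [(major, FIGURES[figure][0]), (minor, FIGURES[figure][1])]

        def dist(term):   # is `term` distributed in a premise that contains it?
            for letter, (subj, pred) in premises:
                if term == subj and distributes(letter, "subject"): return True
                if term == pred and distributes(letter, "predicate"): return True
            return False

        if not dist("M"):                                           return False  # Rule 2
        if distributes(conclusion, "predicate") and not dist("P"):  return False  # Rule 3 (illicit major)
        if distributes(conclusion, "subject") and not dist("S"):    return False  # Rule 3 (illicit minor)
        if major in "EO" and minor in "EO":                         return False  # Rule 4
        if (major in "EO" or minor in "EO") and conclusion in "AI": return False  # Rule 5
        if major in "AE" and minor in "AE" and conclusion in "IO":  return False  # Rule 6
        return True

    print(valid("EIO", 2))   # True  - Festino, the example worked earlier
    print(valid("AAA", 2))   # False - undistributed middle
    moods = ["".join(m) for m in product("AEIO", repeat=3)]
    print(sum(valid(m, f) for m in moods for f in (1, 2, 3, 4)))   # 15 forms survive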
Exposition of the 15 Valid Forms of the Categorical Syllogism 37 Mood (64 possible) Figure (4 possible) Logical form ( 64 x 4 = 256) Out of 256 forms, only 15 are valid Valid forms have names that contain the vowels of the mood EAE-1 is Celarent EAE-2 is Cesare
The 15 Valid Forms of the Categorical Syllogism 38 Valid form in the First Figure AAA-1Barbara EAE-1Celarent AII-1Darii EIO-1Ferio
The 15 Valid Forms of the Categorical Syllogism, continued 39 Valid forms in the Second Figure AEE-2Camestres EAE-2Cesare AOO-2Baroko EIO-2Festino
The 15 Valid Forms of the Categorical Syllogism, continued 40 Valid forms in the Third Figure AII-3Datisi IAI-3Disamis EIO-3Ferison OAO-3Bokardo
The 15 Valid Forms of the Categorical Syllogism, continued 41 Valid forms in the Fourth Figure AEE-4Camenes IAI-4Dimaris EIO-4Fresison
Exercises pg 253 42
Exercises pg 253 43
Summary 44 Standard-form categorical syllogism Syllogism terms Mood and figure Venn diagram technique for testing syllogisms Syllogistic rules and syllogistic fallacies Valid forms of the categorical syllogism
| http://www.slideshare.net/RoyShaff/hum-200-w7-ch6-syllog | 13 |
15 | What Is a Debate?
Basically, a debate is an argument with rules.
Debating rules will vary from one competition to another, and there are several formats for debates. Debates can involve single-member teams or teams that include several students.
Typically in a debate two teams are presented a resolution or topic that they will debate, and each team is given a set period of time to prepare an argument.
Students typically don't know their debate subjects ahead of time. The goal is to come up with a good argument in a short amount of time. Students are encouraged to read about current events and controversial issues to prepare for debates.
Sometimes school teams will encourage individual team members to choose special topics and focus on them. This can give a team special strengths in certain topics.
At a debate, one team will argue in favor (pro) and the other will argue in opposition (con). Sometimes each team member speaks, and sometimes the team selects one member to speak for the entire team.
A judge or a panel of judges will assign points based on the strength of the arguments and the professionalism of the teams. One team is usually declared the winner and that team will advance to a new round.
A typical debate includes:
- Students hear the topic and take positions (pro and con)
- Teams discuss their topics and come up with statements
- Teams deliver their statements and offer main points
- Students discuss the opposition's argument and come up with rebuttals
- Rebuttals delivered
- Closing statements made
Each of these sessions is timed. For instance, teams may have only 3 minutes to come up with their rebuttal.
- By participating on a debate team, students learn the art of persuasion.
- Research has shown that participation in debates increases students' academic performance and increases their chances of earning a college degree.
- Urban debate teams are making a strong comeback.
- A school team will prepare to compete in local, regional, and national tournaments.
- Many colleges offer summer programs that teach debating skills.
- Students benefit from preparing for debates by honing their research skills.
- Students also benefit from the experience of speaking in public.
- Students can start a debate team in their own schools. If you are interested, you should do some research to find out how to start a club in your school. | http://homeworktips.about.com/od/speechclass/a/debate.htm | 13 |
15 | In Your Opinion:
Are Athletes Heroes?
Students participate in a classroom debate about athletes as heroes.
Students define the words debate, pro, and con. Students clearly express oral and/or written opinions about whether people should view athletes as heroes.
hero, athlete, debate, pro, con, issue
- teacher-selected and student-researched articles and commentaries about various athletes from print and/or online sources
- box or paper bag
- small slips of paper with the words pro and con written on them
Before teaching the lesson: Write the words pro and con on slips of paper. Put the slips into a box or paper bag.
Complete this activity over several days.
- Discuss the meaning of the word debate. Discuss examples of debate topics.
- Ask the following questions: In your opinion, what is a hero? Are athletes heroes? (If yes) What qualities make an athlete a hero? (If no) Why not?
- Tell students that the class will hold a debate on this topic. Explain general debating guidelines from teacher-selected resources or from An Introduction to the Debating Process and Debate Rules [scroll down the page]. Discuss the meanings of the words pro and con.
Find many more resources in the Education World article, It's Up for Debate and its accompanying Resources for Classroom Debates.
- Organize students into small groups or teams. Present the box or bag containing the slips of paper with the words pro and con.
- Have a student from each team pick a slip of paper from the bag or box to determine which side of the debate they will cover.
- Tell students that each group must research print or, if Internet access is available, online sources to support their side of the debate. Give students a few days to complete their research, write their position papers, and rehearse for the debate.
Hold the classroom debate.
Variation: Invite other classes in your grade to the debate. Let the visitors pose questions to the teams.
Extension: Ask students if their opinions about athletes as heroes changed as a result of participating in the debate.
Evaluate students' abilities to work in teams, position papers, and participation in the debate.
Lesson Plan Source | http://www.educationworld.com/a_lesson/00-2/lp2290.shtml | 13 |
19 | The Pink Triangle
The pink triangle has become one of the symbols of the modern gay rights movement, but it originated in Nazi concentration camps during World War II. In many camps, prisoners wore badges. These badges were colored based upon the reason for imprisonment. In one common system, men convicted for sexual deviance, including homosexuality, wore a pink triangle. The icon has been reclaimed by many in the post-Stonewall gay rights movement as a symbol of empowerment, and, by some, a symbol of remembrance of the suffering of others during a tragic time in history.
Persecution of homosexuals in Nazi Germany and the Holocaust
In the 1920s, homosexual people in Germany, particularly in Berlin, enjoyed a higher level of freedom and acceptance than anywhere else in the world. However, upon the rise of Adolf Hitler, gay men and, to a lesser extent, lesbians, were two of the numerous groups targeted by the Nazi Party and were ultimately among Holocaust victims. Beginning in 1933, gay organizations were banned, scholarly books about homosexuality, and sexuality in general, were burned, and homosexuals within the Nazi Party itself were murdered. The Gestapo compiled lists of homosexuals, who were compelled to sexually conform to the "German norm."
Between 1933–45, an estimated 100,000 men were arrested as homosexuals, of which some 50,000 were officially sentenced. Most of these men served time in regular prisons, and an estimated 5,000 to 15,000 of those sentenced were incarcerated in Nazi concentration camps. It is unclear how many of the 5,000 to 15,000 eventually perished in the camps, but leading scholar Rüdiger Lautmann believes that the death rate of homosexuals in concentration camps may have been as high as 60%. Homosexuals in the camps were treated in an unusually cruel manner by their captors.
After the war, the treatment of homosexuals in concentration camps went unacknowledged by most countries, and some men were even re-arrested and imprisoned based on evidence found during the Nazi years. It was not until the 1980s that governments began to acknowledge this episode, and not until 2002 that the German government apologized to the gay community. This period still provokes controversy, however. In 2005, the European Parliament adopted a resolution on the Holocaust which included the persecution of homosexuals.
Purge
On May 10, 1933, Nazis in Berlin burned works of Jewish authors, the library of the Institut für Sexualwissenschaft, and other works considered "un-German".
In late February 1933, as the moderating influence of Ernst Röhm weakened, the Nazi Party launched its purge of homosexual (gay, lesbian, and bisexual; then known as homophile) clubs in Berlin, outlawed sex publications, and banned organized gay groups. As a consequence, many fled Germany (e.g., Erika Mann, Richard Plaut). In March 1933, Kurt Hiller, the main organizer of Magnus Hirschfeld's Institute of Sex Research, was sent to a concentration camp.
[Image caption: Autobiography of Pierre Seel, a gay man sent to a concentration camp by the Nazis.]
On May 6, 1933, Nazi Youth of the Deutsche Studentenschaft made an organised attack on the Institute of Sex Research. A few days later the Institute's library and archives were publicly hauled out and burned in the streets of the Opernplatz. Around 20,000 books and journals, and 5,000 images, were destroyed. Also seized were the Institute's extensive lists of names and addresses of homosexuals. In the midst of the burning, Joseph Goebbels gave a political speech to a crowd of around 40,000 people. Hitler initially protected Röhm from other elements of the Nazi Party which held his homosexuality to be a violation of the party's strong anti-gay policy. However, Hitler later changed course when he perceived Röhm to be a potential threat to his power. During the Night of the Long Knives in 1934, a purge of those whom Hitler deemed threats to his power took place. He had Röhm murdered and used Röhm's homosexuality as a justification to suppress outrage within the ranks of the SA. After solidifying his power, Hitler would include gay men among those sent to concentration camps during the Holocaust.
Himmler had initially been a supporter of Röhm, arguing that the charges of homosexuality against him were manufactured by Jews. But after the purge, Hitler elevated Himmler's status and he became very active in the suppression of homosexuality. He exclaimed, "We must exterminate these people root and branch... the homosexual must be eliminated." (Plant, 1986, p. 99).
Shortly after the purge in 1934, a special division of the Gestapo was instituted to compile lists of gay individuals. In 1936, Heinrich Himmler, Chief of the SS, created the "Reich Central Office for the Combating of Homosexuality and Abortion".
Gays were not initially treated in the same fashion as the Jews, however; Nazi Germany thought of German gay men as part of the "Master Race" and sought to force gay men into sexual and social conformity. Gay men who would or could not conform and feign a switch in sexual orientation were sent to concentration camps under the "Extermination Through Work" campaign.
More than one million gay Germans were targeted, of whom at least 100,000 were arrested and 50,000 were serving prison terms as convicted gay men. Hundreds of European gay men living under Nazi occupation were castrated under court order.
Some persecuted under these laws would not have identified themselves as gay. Such "anti-homosexual" laws were widespread throughout the western world until the 1960s and 1970s, so many gay men did not feel safe to come forward with their stories until the 1970s when many so-called "sodomy laws" were repealed.
Lesbians were not widely persecuted under Nazi anti-gay laws, as it was considered easier to persuade or force them to comply with accepted heterosexual behavior. However, they were viewed as a threat to state values.
Homosexuality and the SS
According to Geoffrey J. Giles (mentioned earlier) the SS, and its leader Heinrich Himmler, were particularly concerned about homosexuality. More than any other Nazi leader, Himmler's writing and speeches denounced homosexuality. However, despite consistently condemning homosexuals and homosexual activity, Himmler was less consistent in his punishment of homosexuals. In Geoffrey Giles' article "The Denial of Homosexuality: Same-Sex Incidents in Himmler's SS", several cases are put forward where members of the Nazi SS are tried for homosexual offences. On a case by case basis, the outcomes vary widely, and Giles gives documented evidence where the judges could be swayed by evidence demonstrating the accused's "aryan-ness" or "manliness", that is, by describing him as coming from true Germanic stock and perhaps fathering children. Reasons for Himmler's leniency in some cases may derive from the difficulty in defining homosexuality, particularly in a society that glorifies the masculine ideal and brotherhood.
Concentration camps
Estimates vary widely as to the number of gay men imprisoned in concentration camps during the Holocaust, ranging from 5,000 to 15,000, many of whom died. Larger numbers include those who were both Jewish and gay, or even Jewish, gay, and communist. In addition, records as to the specific reasons for internment are non-existent in many areas, making it hard to put an exact number on exactly how many gay men perished in death camps. See pink triangle.
Gay men suffered unusually cruel treatment in the concentration camps. They faced persecution not only from German soldiers but also from other prisoners, and many gay men were beaten to death. Additionally, gay men in forced labor camps routinely received more grueling and dangerous work assignments than other non-Jewish inmates, under the policy of "Extermination Through Work". SS soldiers also were known to use gay men for target practice, aiming their weapons at the pink triangles their human targets were forced to wear.
The harsh treatment can be attributed to the view of the SS guards toward gay men, as well as to the homophobic attitudes present in German society at large. The marginalization of gay men in Germany was reflected in the camps. Many died from beatings, some of them caused by other prisoners. Nazi doctors often used gay men for scientific experiments in an attempt to locate a "gay gene" to "cure" any future Aryan children who were gay.
Experiences such as these can account for the high death rate of gay men in the camps as compared to the other "anti-social groups." A study by Rüdiger Lautmann found that 60% of gay men in concentration camps died, as compared to 41% for political prisoners and 35% for Jehovah's Witnesses. The study also shows that survival rates for gay men were slightly higher for internees from the middle and upper classes and for married bisexual men and those with children.
Post-War
[Image caption: One point of the Homomonument in Amsterdam, to gay and lesbian victims of persecution, which is formed of three large pink triangles made of granite.]
Homosexual concentration camp prisoners were not acknowledged as victims of Nazi persecution. Reparations and state pensions available to other groups were refused to gay men, who were still classified as criminals — the Nazi anti-gay law was not repealed until 1994, although both East and West Germany liberalized their criminal laws against adult homosexuality in the late 1960s.
"Gay Holocaust" survivors could be re-imprisoned for "repeat offences", and were kept on the modern lists of "sex offenders". Under the Allied Military Government of Germany, some homosexuals were forced to serve out their terms of imprisonment, regardless of the time spent in concentration camps.
The Nazis' anti-gay policies and their destruction of the early gay-rights movement were generally not considered suitable subject matter for Holocaust historians and educators. It was not until the 1970s and 1980s that there was some mainstream exploration of the theme, with Holocaust survivors writing their memories, plays such as Bent, and more historical research and documentaries being published about the Nazis' homophobia and their destruction of the German gay-rights movement.
Since the 1980s, some European and international cities have erected memorials to remember the thousands of homosexual people who were murdered and persecuted during the Holocaust. Major memorials can be found in Berlin, Amsterdam (Netherlands), Montevideo (Uruguay), and San Francisco. In 2002, the German government issued an official apology to the gay community.
In 2005, the European Parliament marked the 60th anniversary of the liberation of the Auschwitz camp with a minute's silence and the passage of a resolution which included the following text:"...27 January 2005, the sixtieth anniversary of the liberation of Nazi Germany's death camp at Auschwitz-Birkenau, where a combined total of up to 1.5 million Jews, Roma, Poles, Russians and prisoners of various other nationalities, and homosexuals, were murdered, is not only a major occasion for European citizens to remember and condemn the enormous horror and tragedy of the Holocaust, but also for addressing the disturbing rise in anti-Semitism, and especially anti-Semitic incidents, in Europe, and for learning anew the wider lessons about the dangers of victimising people on the basis of race, ethnic origin, religion, social classification, politics or sexual orientation...."
An account of a gay Holocaust survivor, Pierre Seel, details life for gay men during Nazi control. In his account he states that he participated in his local gay community in the town of Mulhouse. When the Nazis gained power over the town his name was on a list of local gay men ordered to the police station. He obeyed the directive to protect his family from any retaliation. Upon arriving at the police station he notes that he and other gay men were beaten. Some gay men who resisted the SS had their fingernails pulled out. Others were raped with broken rulers and had their bowels punctured, causing them to bleed profusely. After his arrest he was sent to the concentration camp at Schirmeck. There, Seel stated that during a morning roll-call, the Nazi commander announced a public execution. A man was brought out, and Seel recognized his face. It was the face of his eighteen-year-old lover from Mulhouse. Seel states that the Nazi guards then stripped the clothes of his lover, placed a metal bucket over his head, and released trained German Shepherd dogs on him, which mauled him to death.
Rudolf Brazda, believed to be the last surviving person who was sent to a Nazi concentration camp because of his homosexuality, died in France in August 2011, aged 98. Brazda was sent to Buchenwald in August 1942 and held there until its liberation by U.S. forces in 1945. Brazda, who settled in France after the war, was later awarded the Legion of Honour.
Early Holocaust and genocide discourse
Arising from the dominant discourse of the Jewish suffering during the years of Nazi domination, and building on the divergence of differential victimhoods brought to light by studies of the Roma and the mentally ill, who suffered massively under the eugenics programs of the Third Reich, the idea of a “Gay Holocaust” was first explored in the early 1970s. However, extensive research on the topic was impeded by a continuation of Nazi policies on homosexuals in post-war East and West Germany and continued western notions of homophobia.
The civil rights movement, which began with Black movements in the United States as well as Women’s movements in Europe and the Americas was adopted by gay and lesbian organizations throughout the West, and yielded the first exploration of homosexuals within the context of the Holocaust. The idea of homosexuals as specific targets of Hitler’s final solution was however not salient with Zionist notions of victimhood during the Nazi regime and was also met with opposition within The United States during the conservative revival of the Reagan era and at the onset of the HIV/AIDS pandemic.
The word “genocide” was generated from a need for new terminology in order to understand the gravity of the crimes committed by the Nazis. First coined by Raphael Lemkin in 1944, the word became politically charged when the Genocide Convention was adopted by the United Nations on December 9, 1948, which created an obligation for governments to respond to such atrocities in the future. The debate on the “Gay Holocaust” is therefore a highly loaded debate which would result in an international acknowledgement of state sponsored homophobia as a precursor to genocide should the proponents of the “Gay Holocaust” succeed. However the United Nations definition does not include sexual orientation (or even social and political groups) within its qualifications for the crime. Genocide by the U.N. definition is limited to national, ethnical, racial or religious groups and as this is the only accord to which nations have pledged allegiance, it stands as the dominant understanding of the term. It is, however, what Michel-Rolph Trouillot terms “an age when collective apologies are becoming increasingly common” as well as a time when the established Holocaust discourse has settled and legitimized claims of the Jewish, Roma and mentally ill victims of Nazi persecution so it would seem an appropriate time to at least bring attention to the debate of the Gay Holocaust, even if the issue is not to be settled.
A lack of research means that there is relatively little data on the dispersion of gay men throughout the camps however Heger suggests in his book The Men with The Pink Triangle that they were subjected to harsher labor than smaller targeted groups, such as the political prisoners, and furthermore suffered a much higher mortality rate. They also lacked a support network within the camps and were ostracized in the prison community. Homosexuals, like the mentally ill and many Jews and Roma, were also subjected to medical experimentation in the hopes of finding a cure to homosexuality at the camp in Buchenwald.
The conception of Jewish exclusivity in the Holocaust went unchallenged in the early years of study on the subject. It is undeniable that the Jews suffered the greatest death toll, and entire communities were obliterated in Eastern Europe and to a great extent in western countries. The notion of exclusivity however is challenged by the existence of similar forces working against different social and ethnic groups such as homosexuals and the Roma, which resulted in the victimization and systematic destruction of homosexual lives and lifestyles, as well as those of the Roma. An inclusion of social groups in a definition of genocide would further challenge the notion of the Jewish genocide as unique within the context of the Holocaust. While statistically speaking Jews suffered much more at the hands of the Nazis, Elie Wiesel’s belief that “a focus on other victims may detract from the Judaic specificity of the Holocaust” fosters a misrepresentation of history and devalues the suffering of other victims of Nazi atrocities. Simon Wiesenthal argues that “the Holocaust transcended the confines of Jewish community and that there were other victims.” In the mid-1970s new discourses emerged that challenged the exclusivity of the Jewish genocide within the Holocaust, though not without great resistance.
Changes with the civil rights movement
The civil rights movements of North America in the 1970s saw an emergence of victim claims through revision and appropriation of historical narratives. With the shift away from the traditionally conservative notion of history as the story of power and those who held it, social historians emerged with narratives of those who suffered under and resisted those powers. African Americans created their own narrative, as firmly based on evidence as the discourses already in existence, as part of a social movement towards civil rights grounded in a history of victimization and racism. Along similar lines, the gay and lesbian movement in the United States also used revisionism to write a narrative that had only just found an audience willing to validate it.
There were two processes at work in this new discourse, revisionism and appropriation, which Arlene Stein teases out in her article “Whose Memory, Whose Victimhood?”, both of which were used at different points in the movement for civil rights. The revisionist project was taken on in a variety of mediums, historical literature being only one of many. The play Bent and a limited number of memoirs, which recall The Diary of Anne Frank, coincided with the appropriation of the pink triangle as a symbol of the new movement and a reminder to “never forget.” While the focus of these early revisions was not necessarily to establish Nazi policy on homosexuals as genocidal, they began a current towards legitimizing the victimization of homosexuals under the regime, a topic that had not been addressed until the 1970s.
Historical works eventually focused on the nature and intent of Nazi policy. Heinz Heger, Gunter Grau and Richard Plant all contributed greatly to the early Holocaust discourse on homosexuals which emerged throughout the 1970s and early 1980s. Central to these studies was the finding that, statistically speaking, homosexuals suffered greater losses than many of the smaller minorities under Nazi persecution, such as the Jehovah’s Witnesses, and within the camps experienced harsher treatment and ostracism as well as execution by firing squad and in the gas chambers.
These early revisionist discourses were joined by a popular movement of appropriation, which invoked the global memory of the Holocaust to shed light on social disparities for homosexuals within the United States. Larry Kramer, one of the founders of ACT UP, an HIV/AIDS activist group that used shock tactics to bring awareness to the disease and attention to the need for funding, popularized the AIDS-as-Holocaust discourse. “The slowness of government response at federal and local levels of government, the paucity of funds for research and treatment, particularly in the early days of the epidemic stems, Kramer argued, from deep-seated homophobic impulses and constituted ‘intentional genocide’.”
While the appropriation of the Holocaust discourse helped to grab the attention needed for an appropriate response to the pandemic, it is highly problematic and perhaps counterproductive to the historical discourse of the time. The notion of AIDS-as-Holocaust and the accompanying notion of AIDS-as-genocide greatly oversimplify the meaning and intention of genocide as a crime. While parallels can be drawn, such as a specific group experiencing disproportionate mortality resulting from apparent neglect by the institutions designed to protect it, the central factors of intention and systematic planning are absent, and the use of the word dilutes the severity of the act.
The Holocaust frame was used again in the early 1990s, this time in relation to right-wing homophobic campaigns throughout the United States. The conservative response yielded a new discourse working against the “Gay Holocaust” scholarship, one which characterized gay and lesbian revisionism as a victimist discourse that sought sympathy and recognition as a pragmatic means of garnering special status and civil rights outside those of the moral majority. Arlene Stein identifies four central elements in the conservative reaction to the Gay Holocaust discourse: she argues that the right attempts to dispel the notion that gays are victims, to pit two traditionally liberal constituencies (gays and Jews) against one another, to draw parallels between Jews and Christians, and thereby to legitimate its own status as an oppressed and morally upright group.
The victimist argument points to a central reason why the discourse of a “Gay Holocaust” has met so much resistance, both politically and in the public consciousness. Alyson M. Cole addresses the anti-victim discourse that has emerged in Western politics since the end of the 1980s. She asserts that “anti-victimists transformed discussions of social obligation, compensations and remedial or restorative procedures into criticisms of the alleged propensity of self-anointed victims to engage in objectionable conduct.” Though she is clear that the anti-victimist discourse is not limited to right-wing politics, the case of the “Gay Holocaust” situates itself along these political boundaries, and the anti-victim discourse is highly relevant to the debate on homosexual claims to genocide under the Third Reich. Cole also identifies a central conflict within the anti-victim discourse, which sheds light on a weakness in the conservative argument against the Gay Holocaust: anti-victimists shun the victim and target it for ridicule as a pity-seeking subject, while simultaneously extolling the virtues of what Cole identifies as the “true victim.” The true victim holds certain personal qualities, propriety, responsibility, individuality and innocence chief among them, which place it beyond the ridicule directed at the victimist. In the case of the Gay Holocaust discourse, the claims made for the recognition of genocide or genocidal processes under Nazi Germany allow the claimants to be relegated to victimist status, and their claims dismissed as bogus.

Post-revisionist framing of the "Gay Holocaust"

[Image caption: Memorial "Stolperstein" for Arnold Bastian, a homosexual victim of the Nazis, located at Grosse Strasse 54 in Flensburg. The text reads: "Here lived Arnold Bastian, born 1908. Arrested 15 January 1944. Penitentiary at Celle. Died on 17 February 1945 at the penitentiary in Hameln."]
In recent years new work has been done on the Gay Holocaust that, rather than emphasizing the severity of destruction to communities or the exclusivity of the genocidal process of the Nazi regime, focuses on the intersections of social constructions such as gender and sexuality within the context of social organization and political domination. Spurlin claims that these all functioned with one another in forming Germany’s social order and its final solution to these perceived social problems. Rather than being autonomous policies, “They were part of a much larger strategy of social disenfranchisement and the marking of enemies....” This discourse incorporates numerous disciplines, including gender studies, queer studies, Holocaust studies and genocide studies, to tease out the axes at which they meet in social control, specifically under National Socialism in Germany.
The approach taken by Spurlin is highly effective, as he cross-relates identity construction with enemy construction and analyzes the way it functioned within the institutions of social organization, such as the medical establishment and the camp. The study reveals that the homophobic impulse, along with anti-Semitism and other perceived national threats, seldom operated alone, and that in terms of Nazi policy they functioned on similar levels with differing opportunities to implement solutions, such as the Holocaust. This study is the most recent addition to the discourses of the “Gay Holocaust” and holds much promise in terms of generating a complete and interwoven understanding of where homosexuality factors into Nazi race policy and social organization. By re-evaluating the way in which National Socialism in Germany generated the “other” and how this functioned in terms of social organization, Spurlin asserts that homophobia was one of many forces that generated Hitler’s Final Solution, along with anti-Semitism and misogyny.
Homosexuals and the Holocaust
Ben S. Austin
Around the turn of the century there was a fairly significant gay rights movement in Germany under the leadership of Magnus Hirschfeld and his organization, the Scientific Humanitarian Committee. The major goals of the movement were to educate the public and to bring about the repeal of Paragraph 175. At the close of World War I, there was a somewhat more liberal climate in Germany, and the Weimar Republic, while it did not repeal the existing law, did not enforce it with the same zeal as imperial Germany had. There was a proliferation of homosexual meeting places, books, articles and films, and homosexuality was considerably more open and more openly discussed.
In the mid-1920s the government reacted to these developments by attempting to enforce the laws more vigorously and to pass more restrictive legislation. In 1929, after a couple of years of debate and discussion, the attempt failed by a narrow majority in the Reichstag. Homosexuals felt that a major victory had been achieved. However, throughout the discussion a clear voice was heard from the Nazi deputies in the Assembly, who voiced the conviction that it was the Jews who were leading this movement in an attempt to undermine the morality of the German people. The racial theme in their position also emerged in their argument that homosexuality had a detrimental impact on desired Aryan family size and population increase, thus weakening German strength; homosexuality was therefore incompatible with racial purity. This was later to be one of Himmler's major arguments. That voice was to become very loud and clear when the Nazi Party gained control in 1933.

The Roehm Affair and Persecution of Homosexuals
The leadership of the Nazi Party included at least one avowed homosexual, Ernst Roehm. He was a member of Hirschfeld's League for Human Rights and openly attended homosexual meeting places. Between 1933 and 1934, Roehm was the leader of the SA (Stormtroopers) and, before the death of Hindenburg in 1934, he was a potential challenger to Hitler's supremacy. With the Nazis' rise to power came an attack from Germany's political left. Attempts were made to discredit Hitler and the Nazis. One of their arguments was the charge of homosexuality in the Nazi ranks. Hitler's old friend Roehm was one of their main targets.
Interestingly, one of Roehm's principal defenders was Heinrich Himmler. He articulated the belief that the accusations against Roehm were the work of Jews who feared the SS and were trying to discredit the movement. The mood of the party, and of Himmler, changed, however, when Hitler decided in 1934 that Roehm was a threat to his authority. Specifically, Hitler feared that Roehm was attempting to turn the SA (at this time over 2 million strong) into a militia and was planning a military challenge to Hitler. While there is no evidence that such a plan existed, Hitler ordered a purge. On June 30, 1934, Roehm, many of his supporters, and over 1,000 of Hitler's political and personal enemies were murdered in the infamous "Night of the Long Knives." While the purge was politically motivated, the justification given for it was the homosexuality of Roehm and several of his associates in the SA command.
Himmler, who had once defended Roehm, assumed leadership of the SS and, in the process, also assumed the role of ridding the movement and Germany of homosexuals. In the wake of the Roehm execution, Hitler ordered the registration of homosexuals, and the Gestapo was charged with the responsibility of creating dossiers on homosexuals and other "asocials" in the Third Reich.
The following year, in 1935, the Reichstag amended Paragraph 175 of the Criminal Code to close what were seen as loopholes in the current law. The new law had three parts:
Paragraph 175: A male who commits a sex offense with another male or allows himself to be used by another male for a sex offense shall be punished with imprisonment.
Where a party was not yet twenty-one years of age at the time of the act, the court may in especially minor cases refrain from punishment.
Paragraph 175a: Penal servitude up to 10 years or, where there are mitigating circumstances, imprisonment of not less than three months shall apply to: (1) a male who, with violence or the threat of violence to body and soul or life, compels another male to commit a sex offense with him or to allow himself to be abused for a sex offense; (2) a male who, by abusing a relationship of dependence based upon service, employment or subordination, induces another male to commit a sex offense with him or to allow himself to be abused for a sex offense; (3) a male over 21 years of age who seduces a male person under twenty-one years to commit a sex offense with him or to allow himself to be abused for a sex offense; (4) a male who publicly commits a sex offense with males or allows himself to be abused by males for a sex offense or offers himself for the same.
Paragraph 175b: An unnatural sex act committed by humans with animals is punishable by imprisonment; the loss of civil rights might also be imposed.
Paragraph 174 of the penal code forbade incest and other sexual offenses with dependents, while Paragraph 176 outlawed pedophilia. Persons convicted under these laws also wore the pink triangle.
The Nazis passed other laws that targeted sex offenders. In 1933, they enacted the Law Against Dangerous Habitual Criminals and Measures for Protection and Recovery. This law gave German judges the power to order compulsory castration in cases involving rape, defilement, illicit sex acts with children (Paragraph 176), coercion to commit sex offenses (Paragraph 177), the committing of indecent acts in public including homosexual acts (Paragraph 183), and murder or manslaughter of a victim (Paragraphs 223-226), if they were committed to arouse or gratify the sex drive, or homosexual acts with boys under 14. The Amendment to the Law for the Prevention of Offspring with Hereditary Diseases, dated June 26, 1935, allowed castration indicated by reason of crime for men convicted under Paragraph 175, if the men consented. These new laws defined homosexuals as "asocials" who were a threat to the Reich and the moral purity of Germany. The punishment for "chronic homosexuals" was incarceration in a concentration camp. A May 20, 1939 memo from Himmler allowed concentration camp prisoners to be blackmailed into castration.
In effect, the definition of "public morality" was made a police matter. In 1936, Himmler created the Reich Central Office for the Combating of Homosexuality and Abortion and appointed Joseph Meisinger to head the office. The results of these administrative changes are very apparent. According to Burleigh and Wippermann (1991:192):
...While in 1934 766 males were convicted and imprisoned, in 1936 the figure exceeded 4,000, and in 1938 8,000. Moreover, from 1937 onwards many of those involved were sent to concentration camps after they had served their "regular" prison sentence...

Himmler's Speech to the SS Group Commanders, February 18, 1937
In a particularly convoluted piece of Nazi logic, Heinrich Himmler placed homosexuality under the ideology of racial theory and racial purity. Drawing upon the fact that Germany had lost over 2 million men during WWI, thus creating a serious imbalance in the reproductive sex ratio, he added an estimated 2 million homosexuals who, in his reckoning, doubled the imbalance. Never mind that these men were not going to procreate anyway; Himmler proceeded to use those figures as a rationale for bringing homosexuality under Nazi racial policy. Portions of that speech follow:
If you further take into account the facts that I have not yet mentioned, namely that with a static number of women, we have two million men too few on account of those who fell in the war, then you can well imagine how this imbalance of two million homosexuals and two million war dead, or in other words a lack of about four million men capable of having sex, has upset the sexual balance sheet of Germany, and will result in a catastrophe.
I would like to develop a couple of ideas for you on the question of homosexuality. There are those homosexuals who take the view: what I do is my business, a purely private matter. However, all things which take place in the sexual sphere are not the private affair of the individual, but signify the life and death of the nation, signify world power...
After likening the homosexual who was killed and thrown into a peat bog to the weeding process in a garden, Himmler continued his tirade:
...In the SS, today, we still have about one case of homosexuality a month. In a whole year, about eight to ten cases occur in the entire SS. I have now decided upon the following: in each case, these people will naturally be publicly degraded, expelled, and handed over to the courts. Following completion of the punishment imposed by the court, they will be sent, by my order, to a concentration camp, and they will be shot in the concentration camp, while attempting to escape. I will make that known by order to the unit to which the person so infected belonged. Thereby, I hope finally to have done with persons of this type in the SS, and the increasingly healthy blood which we are cultivating for Germany, will be kept pure.
Over the next two years, an intricate network of informants was developed. School children were encouraged to inform on teachers they suspected of homosexuality, employers on employees and vice versa. Homosexuals who were arrested were used to create lists of homosexuals or suspected homosexuals. The clear intention was to identify every homosexual in Germany and move them to concentration camps.
Himmler clearly recognized that these strategies would not solve the sexual imbalance problem in Germany. Instead, the purpose of the plan was, in Himmler's own words, to "identify" homosexuals and remove them from society. He still needed a rationale for exterminating them. As in the case of the Gypsies, Himmler fell back on "medical science" as the solution to the homosexuality problem.

The Vaernet Cure
Several suggested solutions to the problem were taken under advisement by the Gestapo. One of the most attractive was that advanced by a Danish SS doctor, Vaernet, who claimed to have developed a hormonal implant that would cure homosexuality. The SS gave him a research position, the necessary funds, laboratory facilities and the concentration camp population as experimental subjects. The testosterone implants were experimentally placed in homosexual inmates and their progress monitored. Some of the reports suggest improvement; for many others, however, there was no significant change. We can only speculate as to the fate of those who, by this process, were determined to be "chronic" and "incurable" homosexuals.

The Extermination of Homosexuals in the Death Camps
Precise figures on the number of homosexuals exterminated in the Nazi death camps have never been established. Estimates range from 10,000 to 15,000. It does not appear that the Nazis ever set it as their goal to completely eradicate all homosexuals. Rather, it seems, the official policy was either to re-educate those who were only "behaviorally" and occasionally homosexual, or to block those who were "incurable" homosexuals through castration, extreme intimidation, or both. For a fascinating empirical sociological examination of this idea, the reader is referred to the work of Rüdiger Lautmann. Nor does it appear that these efforts extended beyond Germany itself to the occupied territories.
However, the numerous testimonies by homosexuals who survived the camp experience suggest that the SS had a much less tolerant view. Those who wore the pink triangle were brutally treated by camp guards and other categories of inmates, particularly those who wore the green (criminals), red (political prisoners) and black (asocials) triangles. The following testimony by the survivor Heinz Heger provides a dramatic illustration:
Extracted from: Heger, Heinz. The Men with the Pink Triangle. Alyson Publications, 1980: 34-37.
"... Our block was only occupied by homosexuals, with about 250 men in each wing. We could only sleep in our night-shirts, and had to keep our hands outside the blankets, for: 'You queer arse-holes aren't going to start wanking here!'
"The windows of had a centimetre of ice on them. Anyone found with his underclothes on in bed, or his hand under his blanket -- there were checks almost every night -- was taken outside and had serveral bowls of water poured over him before being left standing outside for a good hour. Only a few people survived this treatment. The least result was bronchitis, and it was rare for any gay person taken into the sick-bay to come out alive. We who wore the pink triangle were prioritised for medical experiments, and these generally ended in death. For my part, therefore, I took every care I could not to offend against the regulations.
"Our block senior and his aides were 'greens,' i.e. criminals. They look it, and behaved like it too. Brutal and merciless towards us 'queers', and concerned only with their own privelege and advantage, they were as much feared by us as the SS.
"In Sachsenhausen, at least, a homosexual was never permitted to have any position of responsibility. Nor could we even speak with prisoners from other blocks, with a different coloured badge; we were told we might try to seduce them. And yet, homosexuality was much more rife in the other blocks, where there were no men with the pink triangle, than it was in our own.
"We were also forbidden to approach nearer than five metres of the other blocks. Anyone caught doing so was whipped on the 'horse', and was sure of at least 15 to 20 strokes. Other categories of prisoner were similarly forbidden to enter our block. We were to remain isolated as the damnedest of the damned, the camp's 'shitty queers', condemned to liquidation and helpless prey to all torments inflicted by the SS and Capos.
"The day regularly began at 6 a.m., or 5 a.m. in the summer, and in just half an hour we had to be washed, dressed and have our beds made up in military style. If you still had time, you could have breakfast, which meant a hurried slurping down the thin flour soup, hot or luke-warm, and eating your piece of bread. Then we had to form up in eights on the parade-ground for morning roll-call. Work followed, in winter from 7.30 a.m. to 5 p.m., and in summer from 7 a.m. to 8 p.m., with a half hour break at the workplace. After work, straight back to camp and immediate parade for evening roll-call.
"Each block marched in formation to the parade-ground and had its permanent position there. The morning parade was not so drawn-out as the much feared evening roll-call, for only the block numbers were counted, which took about an hour, and then the command was given for work detachments to form up.
"At every parade, those that had just died had to be present, i.e. they were laid out at the end of each block and counted as well. Only after the parade, and having been tallied by the report officer, were they taken to the mortuary and subsequently burned.
"Disabled prisoners also had to be present for parade. Time and again we helped or carried comrades to the parade-ground who had been beaten by the SS only hours before. Or we had to bring along fellow-prisoners who were half-frozen or feverish, so as to have our numbers complete. Any man missing from our block meant many blows and thus many deaths.
"We new arrivals were now assigned to our work, which was to keep the area around the block clean. That, at least, was what we were told by the NCO in charge. In reality, the purpose was to break the very last spark of independent spirit that might possibly remain in the new prisoners, by senseless yet heavy labour, and to destroy the little human dignity that we still retained. This work continued til a new batch of pink-triangle prisoners were delivered to our block and we were replaced.
"Our work, then, was as follows. In the morning we had to cart the snow outside our block from the left side of the road to the right side. In the afternoon we had to cart the same snow back from the right side to the left. We didn't have barrows and shovels to perform this work either, that would have been far too simple for us 'queers'. No, our SS masters had thought up something much better.
"We had to put our coats with the buttoned side backward, and take the snow away in the container this provided We had to shovel up the snow with our hands — our bare hands, as we didn't have any gloves. We worked in teams of two. Twenty turns at shovelling up the snow with our hands, then twenty turns at carrying it away. And so, right throught the evening, and all at the double!
"This mental and bodily torment lasted six days, until at last new pink-triangle prisoners were delivered to our block and took over for us. Our hands were cracked all over and half frozen off, and we had become dumb and indifferent slaves of the SS.
"I learned from prisoners who had already been in our block a good while that in summer similar work was done with earth and sand. "Above the gate of the prison camp, however, the 'meaningful' Nazi slogan was written in big capitals: 'Freedom through work!'"
Furthermore, homosexuals were at another important disadvantage. They lacked the group support within the camp to maintain morale. As Lautmann observes:
The prisoners with the pink triangle had certainly shown "precamp" qualities of survival, but they did not get a chance to apply these qualities in the camp. Because their subculture and organizations had been wantonly destroyed, no group solidarity developed inside the camp...Since every contact outside was regarded as suspicious, homosexuals did not even dare speak to one another inside (as numerous survivors have reported in interviews).
Death rates for homosexuals were much higher, perhaps three to four times higher, than for other non-Jewish categories of prisoners. While their overall numbers were small, their fate in the camps more nearly approximates that of the Jews than that of any other category, except, perhaps, the Gypsies. And homosexuals did not survive for very long: of those who were exterminated, most were killed within the first few months of the camp experience.

Conclusion
One last issue deserves brief attention. The Nuremberg War Crimes Trials, held in 1945, did not address the plight of homosexuals with the same seriousness accorded other victims of the Holocaust. Burleigh and Wippermann (1991:183) suggest that this may reflect the fact that after the war homosexuality was still a crime under German law and widespread homophobia still existed. In fact, the Reich laws against homosexuality (i.e., the Nazi interpretations of Paragraph 175 of the Reich Criminal Code) were not repealed in Germany until 1969. As a consequence, homosexual survivors of the camp experience were still reticent to press their case before the courts, since they could still be prosecuted under existing laws.
However, the contemporary Gay Rights Movement, both in the United States and in Europe, has led to a re-opening of the plight of homosexuals in Nazi Germany. The unparalleled treatment of homosexuals under the Nazi regime raises the same questions raised by the Holocaust itself: How could it happen? Can it happen again? And, how can its recurrence be prevented?
'I Had Always Been Blessed with Good Fortune'
For decades, the subject of the Nazi persecution of homosexuals during the Third Reich was swept under the rug and reparations were almost never paid. Rudolf Brazda, who may be the last living gay man to have survived the terror, shares his life story in a newly published book.
His body emaciated and his toothless mouth hanging open, Rudolf Brazda is skin and bones. Then comes his scream -- a loud lament that becomes a moan and then tapers off. Brazda is lying in his hospital bed, waiting at death's door. He alternately shouts, whispers or goes silent. Minutes creep by, then a quarter of an hour, then half an hour. Sometimes he'll say something and then go quiet again.
When he does speak, he utters lines like, "I'm too old to live," "I'm waiting for time to pass by," "I just don't want to do this anymore!" or "Everything's shit."
The door to Room 8411 opens. Worried about the condition of her elderly patient, a nurse at the Emile Muller Hospital in the Alsatian city of Mulhouse has come in to check on Brazda. She doesn't speak any German and he barely speaks any French, so they communicate by making faces at each other. The nurse raises a questioning eyebrow at her patient and he shakes his head. Then he winks at her and smiles. It's nothing serious.
"You comédien," she says, playfully cursing at him in French. Ever the comedian and charmer, Brazda, grins back at her. It is exactly these traits that helped him to cheat death when he was a prisoner at the Buchenwald concentration camp.
Ninety-eight-year-old Brazda is believed to be the last gay man alive who can recount what it was like to live as a homosexual man during the Third Reich. He's a man who can also remember the persecution, the legal proceedings against gays, the punishment and murder of his friends. But he also remembers what it was like to have sex in a concentration camp and what it felt like to be liberated.
6,000 Gay Men Murdered Under Hitler
Brazda kept his past to himself for many years. For the last five decades, he worked as a roofer, built his own house and lived together with his life partner in France's Alsace region near the German border. A few years ago, he buried his partner there, too. Thoughts about the Nazis weren't much of an issue for him over the past 50 years. But in 2008, at the age of 95, Brazda was confronted by his past when he saw a news story about the dedication of a new memorial to homosexual survivors from the era of Nazi persecution in Berlin's Tiergarten park.
"We didn't think there were any more (homosexual survivors) left, we thought they were all dead," says Uwe Neumärker, the director of Berlin's Holocaust Memorial. The memorial is comprised of 2,711 concrete slabs commemorating the 6 million Jews murdered during the Holocaust. Neumärker is also responsible for another memorial site located just across the street. Hidden between trees, it features a single slab almost identical to those in the main memorial. It was erected to honor the memory of the homosexual victims of Nazi persecution.
But the memorial has also been the source of some concern for Neumärker. Attacks have been perpetrated against the site, and the memorial is also the subject of an ongoing dispute over what it is actually intended to honor. Is it supposed to be a memorial remembering the estimated 6,000 gay men murdered under Hitler? Or should it also honor the memory of lesbians even though they weren't forced into concentration camps?
When Brazda came on to the scene in Berlin, it was like a ghost of the past appearing, albeit a very pleasant one. "Suddenly this nice old guy appeared from out of nowhere," Neumärker recalls of the visit Brazda made to the Berlin memorial during the summer of 2008. The cheerful nonagenarian reveled in all the attention, the cameras and the bouquets of flowers. He also flirted unabashedly with Berlin's openly gay mayor, Klaus Wowereit. Photos taken during the visit show Wowereit stroking Brazda's hair in front of the memorial -- a belated gesture of amends for a man who is nearly 100.
Visiting his hospital room now, one would love to ask Brazda more questions about his past and how he feels today. He has woken from a short nap and is eating a piece of apricot cake. It's a beautiful day outside, the sun is shining and a letter from Wowereit has just arrived. Wowereit felt sorry that Brazda had to cancel a recent trip to Berlin. Brazda finishes reading the letter and kisses it, his face filled with a beaming expression.
Refuge in Photos
Brazda is almost completely deaf, and he has a tough time understanding questions. But he still has good eyesight, and the best way for anyone interviewing him to get the man talking is to show him pictures from the past. Snapshots from his home state of Thuringia, from the town of Meuselwitz where he lived before being arrested by the Nazis, and of the Phönix public swimming pool located next to a coal factory. It was here in the summer of 1933 that Brazda, who was 20 years old at the time, met his first love. Looking at the old photograph seems to cheer him up -- he perks up and smiles.
Ever the comedian, Brazda says, "I pushed him into the water in order to make his acquaintance."
In another picture, Brazda can be seen posing with five friends, all dapper in suits and ties, looking happy and relaxed. At that time, life in the German countryside was apparently still more open for gay men than in the big cities, where the Nazis had already started their campaign of persecution against homosexuals.
"It was a wonderful time, we had so much fun," Brazda reminisces. He even staged a mock wedding to marry his boyfriend, with his mother and siblings joining in the celebration. Nobody seemed to mind that the young men had even gotten a fake priest to bless their union.
The Nazi Witchhunt Against Homosexuals
Their faux wedding took place in the summer of 1934, around the same time Adolf Hitler ordered the shooting of Ernst Röhm, the head of the SA -- the Sturmabteilung or Stormtroopers -- and the execution of his cronies in the elite paramilitary unit. Although the Stormtroopers had played a key role in Hitler's rise to power, they now stood in his way. Hitler used the false pretense of purging homosexuals from Nazi ranks as a way of ridding himself of Röhm and his followers (or even opponents he deemed a threat to his power).
Shortly thereafter, the Nazi witchhunt against homosexuals began in earnest. On July 2, the Meuselwitzer Tageblatt, the local newspaper in Brazda's town, even joined in the homophobic fray by railing against what it called the "lust boys" in the SA. "Our Führer has given the order for the merciless extermination of these festering sores," the paper wrote.
Rudolf Brazda, who died on August 3 aged 98, was the last known survivor of the thousands of men who were sent to Nazi concentration camps for being homosexual.
Some six million Jews perished in the Holocaust. The Nazis also killed Gypsies, Jehovah’s Witnesses and political opponents. And they persecuted gay men. Heinrich Himmler was obsessed with the idea that homosexuality was an infectious disease, endangering the “National Sexual Budget”. Gay men were seen as obstacles to Hitler’s programme to increase the master race.
Estimates suggest that between 10 and 15 thousand gay men from all over Europe were sent to the concentration camps where, like other inmates, they had to wear coloured badges to denote the nature of their “crimes”. The red triangle was for political prisoners, green for common criminals, blue for would-be emigrants from Germany, purple for Jehovah’s Witnesses, black for Gypsies and other “antisocials”, and pink for homosexuals. Jews wore a yellow triangle with a triangle of another colour superimposed to make a Star of David.
Although homosexuals constituted one of the smallest categories in the camps, they were often treated with a special ferocity — subjected to beatings, “extermination through labour” in the quarries, castration and medical experiments to make them “normal”; they also often suffered the homophobia of their fellow inmates.
Brazda was living an openly gay life in Leipzig when Hitler came to power. Though homosexuality was technically illegal, the Weimar Republic was largely tolerant: “We had our own meetings. There was a dance club in Leipzig where we would often meet,” Brazda recalled. “There was great freedom for us. I couldn’t imagine anything else. Then we started hearing about Hitler and his bandits.”
The Nazis expanded anti-gay laws to make homosexual acts a felony and, in 1934, began raiding gay bars in big cities. In 1937 Brazda was denounced and arrested for “unnatural lewdness”. After a month in custody, presented with love letters and poems he had written to his then partner, he “confessed” to the relationship and was imprisoned for six months for “debauchery”.
After his release Brazda, who had Czechoslovak citizenship though he did not speak the language, was deported to Czechoslovakia. He moved to the spa town of Karlsbad in the German-speaking Sudetenland. There he joined a theatre troupe, developing a popular tribute act to Josephine Baker, and stayed on even after the Nazis occupied the Sudetenland in 1938.
Arrested for a second time in 1941, Brazda spent another six months behind bars, then, in August 1942, was sent to Buchenwald where he was given the number 7952, and made to sew the pink triangle on to his camp uniform: “I didn’t understand what was happening but what could I do? Under Hitler you were powerless,” he recalled.
Two guards at the camp saved Brazda’s life. The first, apparently himself gay, removed Brazda from the “punishment battalion” at the local quarry and secured him a posting to lighter duties in the quarry’s infirmary. Several months later, Brazda joined the roofers unit, part of the “Bauhof” kommando in charge of maintaining the concentration camp buildings. As part of the kommando he was given extra food rations.
Then, just before liberation, when the camp’s prisoners were rounded up for a “death march” to another camp at Flossenburg, a second guard hid Brazda in the camp’s animal pen. “He put me in a shed with the pigs, made me a bed and I lay there for 14 days until the Americans came. After that, I was a free man,” he recalled.
Rudolf Brazda was born on June 26 1913 in Brossen, in the central German state of Thuringia. His parents were originally from Bohemia and Rudolf was the youngest of their eight children. His father worked at the local brown coal mines, but died in a work accident in 1922 when his youngest son was nine years old.
After leaving school, Brazda trained as a roofer, having failed in his ambition to become a sales assistant with a gentlemen’s outfitter. Aged 20 he met his first boyfriend, Werner, at a dance in Leipzig and they moved in together. Indeed such was the tolerant atmosphere of the time that the pair went through a ceremony of “marriage”, with Brazda’s mother and siblings serving as witnesses.
Brazda had his first encounter with Nazi brutality at Café New York, a well-known haunt of Leipzig’s gay community: “The Nazi stormtroopers dragged us out by our hair,” he recalled. After the closure of gay pubs and meeting places, a more systematic persecution began. “We gays were like hunted animals. Wherever I went with my companion the Nazis were always already there.”
In 1936 Werner was enlisted to do his military service and Brazda took up a position as a bell boy at a Leipzig hotel, where he was arrested the following year. Werner, meanwhile, is believed to have been killed on active service in 1940. During his time in the Sudetenland, Brazda settled in with a new companion called Anton, and one of his most enduring and painful memories of Buchenwald was of an SS man ripping a gold chain that Anton had given him from his neck.
Yet he was acutely aware of his good fortune in surviving the camp and retained vivid memories of some of the 650 “Pink Triangles” deported to Buchenwald who were not so lucky. One young man had gouged his own eyes out on arrival at the camp so that he would be sent to the infirmary rather than the quarry. “The only thing that was waiting for him in the infirmary was a lethal injection. I never saw him again.”
Within the roofers’ kommando, Brazda made friends with a French communist from Alsace. After the camp’s liberation he followed him back to Mulhouse and decided to make his home there.
At a costume ball in the 1950s, Brazda met Edouard, an ethnic German who had been expelled from Yugoslavia and who became his companion. In the early 1960s they moved into a house they had built in the suburbs of Mulhouse, where Brazda cared for Edouard after he was crippled by a work accident in the 1970s. He continued to live there after Edouard’s death in 2003.
For decades Brazda did not speak about what had happened to him. Homosexuality was not decriminalised in France until 1982. It was only in May 2008, when Berlin’s openly gay mayor Klaus Wowereit unveiled a memorial to homosexuals persecuted in the Third Reich, that Brazda decided to speak out. He had been watching the ceremony on television and picked up the phone to correct a claim by the organisers that the last witness had died three years earlier. Three weeks later Klaus Wowereit went through the ceremony again. Standing at his side, clutching a red rose, was a white-haired, still wildly flirtatious nonagenarian.
Brazda’s reappearance led to invitations to attend a number of gay events, including Europride Zurich in 2009. Last year he took part in Mulhouse in the unveiling of a plaque in memory of homosexual victims of the Nazis and was a guest of honour at a remembrance ceremony at Buchenwald.
This year the German journalist Alexander Zinn published his biography and in April Brazda was appointed a Knight of the Legion of Honour.
Part 2: The Nazi Persecution Begins
Brazda went on with his day-to-day life as if nothing had happened -- at least he tried to. By that point, he had moved in together with his boyfriend, and they would hold hands in public and go to village festivals and the annual summer market with their other gay friends. If locals shot them disapproving looks, Brazda and his friends would pretend to be an especially boisterous soccer team.
But Brazda only seems to remember parts of the story when he looks at those pictures from the summer of 1934 -- the good parts. He has gaps in his memory. One of the few friends that Brazda still recognizes, Alexander Zinn, is sitting next to Brazda's hospital bed and helping with the interview by blaring the reporter's questions into Brazda's ear while showing him the old photos. Zinn, an author and sociologist, first met Brazda three years ago. Brazda was old at the time, but still sprightly. Zinn had been researching Brazda's story when he came across the criminal file from the concentration camp survivor's trial. The two men then traveled together to Meuselwitz and the former concentration camp at Buchenwald.
"I had always been blessed with good fortune," Brazda told his new friend. Zinn would go on to use it as the title of his new book about Brazda's life.
Blessed with good fortune? For Christmas 1936, their last together, Brazda gave his boyfriend a large chocolate heart. While the two were celebrating the holiday, police and prosecutors were busy tightening the noose. Now that the Nazis had rid the big cities of the "festering sores," they had turned their attention to stamping out homosexuality in the countryside. Their strategy was to arrest Meuselwitz's gays, interrogate them and get them to make incriminating statements against one another.
On April 8, 1937, Brazda finally got caught in their noose. At first, he insisted that he was not "attracted to men whatsoever." The official investigating Brazda's case, however, noted that the accused displayed the "typical appearance of a man with homosexual tendencies." Officials also presented further pieces of "evidence" like letters and love poems.
Buchenwald's 'Punishment Battalion'
Following a month in custody, Brazda finally collapsed in tears and confessed his "crimes." A short time later, he was sentenced to six months in prison because, according to the verdict, "he felt love for his friend" instead of "conquering his unnatural urges."
Four years later, the Nazis arrested Brazda a second time, and in August 1942, he was sent to the Buchenwald concentration camp. Zinn's book, recently published in German, is full of the crazy tales Brazda told him about concentration camp life a few years earlier, when he was still lucid enough to do so. Almost all homosexual prisoners landed in the so-called "punishment battalion," where they were subjected to excessive forced labor. Separated from the rest of the camp by barbed wire, they started work at the quarry in the early hours of the morning. "Extermination through labor," was the SS's strategy for homosexual prisoners.
But Brazda was spared. He had caught the eye of a political prisoner who worked as a so-called "Kapo," camp inmates appointed by the SS to oversee the quarry work gangs. The man who was feared for his brutality by other prisoners told Brazda to "set his shovel down." After that, Brazda was allowed to work in the medical barracks and dress injuries and wounds.
"One day I was alone in the clinic when the Kapo guy came in," says Brazda. "He took me in his arms and kissed me -- he had his hands all over me." Brazda let the Kapo have his way with him in order to escape the quarry and a slow death by exhaustion.
Ostracism for Gays after War
After working as a medical orderly for a while, he was given a job as a roofer, and then Brazda was moved to the camp's administrative office. Even as American troops advanced closer and closer to the camp and SS troops sent 28,000 camp prisoners out on a death march at the beginning of the spring, Brazda's good fortune never abandoned him.
"I had a friend, a Kapo, who hid me in the pig stalls," Brazda says. On April 11, the American army liberated the camp. Afterwards, Brazda moved to Mulhouse, France, where he still lives today.
Neumärker of the Berlin Holocaust Memorial says Brazda has been on his mind a lot since he came forward in 2008. Suddenly there was a fate, a face that could be attached to his gay memorial.
"The especially tragic thing about this group of victims is the fact that, after they were persecuted by the Nazis, they were then subjected to another form of ostracism after the war," says Neumärker. Neither Brazda nor the bulk of his fellow homosexual survivors of Nazi persecution ever received reparations after 1945.
For the past year, a commission in Berlin has been busy with the task of trying to determine the future of the memorial. The lone concrete slab features a small window through which visitors can view a looping video in which two young men from modern-day Germany can be seen kissing.
An Update for the Memorial
Now the commission wants a different video for the memorial, one that is more inclusive than two men kissing. The commission held a competition and received 13 proposals before selecting five finalists for the final round of decision-making. After months of controversial wrangling and consulting, the commission finally made a decision. They agreed the new video should also show lesbian couples kissing.
"The memorial has to remain contemporary," Neumärker says.
Others have been critical of plans to include lesbians. Brazda biographer Zinn told the news agency AFP in 2010 the plan to depict lesbians is an inaccurate depiction of history. "Historical truth must remain the focus," he said, as no lesbians were targeted during the Holocaust.
Brazda himself isn't sure what he should think about the memorial debate. "People need to know that we homosexuals were persecuted," he says, pausing for effect, "by people who themselves were also gay."
Brazda has grown tired. He glances over at Zinn, rallies a bit of energy and then starts flirting again. "I wish we could have had something together," he says to the man who is almost 60 years his junior. He then smiles and adds, "Whenever I am in the mood for love, I will think of you."
When Zinn first came to visit Brazda in France's Alsace region three years ago, Brazda was so excited and so lonely -- most of his friends had already passed away -- that he gave his house a fresh coat of paint for the occasion. All the attention, the memorial and now the book have been something of a second coming out for Brazda.
"Are you afraid of death?" Zinn shouts into his ear. Brazda is lost in his thoughts and doesn't reply immediately.
"Everyone has lives his own life, and I have lived mine," he answers. "The main thing is to be happy." He says he is appreciative of the freedom that today's young people enjoy. "Everyone is free to do what he wants."
It's time to end the visit and say goodbye.
"Whatever happens, happens," he says. "I'm not scared." He then closes his eyes again and dozes off.
Becker was born in Thale, Germany. At eighteen, he fell in love with an older man, with whom he lived for nearly ten years. In 1935, he was arrested on suspicion of violating Paragraph 175 and sentenced to three years in prison at Nürnberg. On his release he joined the German army and served on the Russian front until 1944. He died in Hamburg, Germany.
L. D. Classen von Neudegg
L. D. Classen von Neudegg was a Holocaust survivor who was imprisoned in a Nazi concentration camp because of his homosexuality. He wrote about his experiences in 1954 in the German magazine Humanitas. His account is one of the most significant records of the experience of homosexuals during the Holocaust.
Heinz Dörmer (1912–2001) was a German man who was imprisoned by the Nazis for homosexuality under Paragraph 175. He was repeatedly released and rearrested, spending more than ten years in a variety of concentration camps and prisons.
Dörmer was born in Berlin, Germany. Deeply involved with church youth groups as a child, by age fifteen, he was frequenting Berlin's gay bars. Dörmer was 10 years old when he joined the German Youth Movement in 1922. In 1929, he founded his own youth group, called the "Wolfsring" (lit. "ring of wolves"), which combined sexual affairs, amateur theater performances, and travel. In 1932, he was promoted to youth leader and worked in the scout movement at a national level. He and his group tried to stay independent, but in October 1933 they were forced to join the Hitler Youth.

Imprisonments
In April 1935, Dörmer was accused of homosexual activities with members of his troop, and from 1941 to 1944 he was imprisoned for corrupting the youth at Neuengamme concentration camp, a "holding tank for homosexuals, politicals, and non-German aliens."

Post-war life
After the war, Dörmer spent another eight years in prison on various charges. After his final release in 1963, he returned to Berlin to live with his father, who died in 1970. His 1982 application for reparations from the German government was rejected. He made an appearance in the 2000 documentary film Paragraph 175, which portrays survivors of persecution then-authorized under the German anti-male homosexuality law of the same name.
Friedrich-Paul von Groszheim
Friedrich-Paul von Groszheim (April 27, 1906 – c. 2003) was a German man who was imprisoned by the Nazis for the crime of homosexuality under Germany's now-repealed Paragraph 175. He was born in Lübeck, Germany.
Von Groszheim was one of 230 men arrested in Lübeck on suspicion of being gay by the SS in January 1937. He was imprisoned for ten months, during which he had to wear a badge emblazoned with a capital A, for Arschficker ("arse-fucker"):
They beat us to a pulp. I couldn't lie down...my whole back (was) bloody. You were beaten until you finally named names.
In 1938 von Groszheim was arrested again and, according to his later account, was released only after agreeing to be castrated. Because of the castration, von Groszheim was rejected as physically unfit for military service in 1940. In 1943 he was arrested a third time, this time as a supporter of the former Kaiser Wilhelm II, and imprisoned as a political prisoner at Neuengamme concentration camp.
Von Groszheim settled in Hamburg, Germany. In 1995, he was one of eight signers to a declaration given to the US Holocaust Memorial Museum in Washington, D.C. that called for the "memorializing and documenting of Nazi atrocities against homosexuals and others."
Karl Lange
Karl Lange (born October 28, 1915, date of death unknown) was imprisoned by the Nazis for the then crime of homosexuality under the criminal code's Paragraph 175, which defined homosexuality as an unnatural act.
In 1935, when Lange was twenty years old, an informer told the police that he had been having secret meetings with a fifteen-year-old youth, and he was arrested. He was released after fifteen months but re-arrested in 1937 and imprisoned at Fuhlsbüttel prison, and then was part of a group that was transferred to Waldheim prison in Saxony.
Transferred to Waldheim in 1943, Karl had a nervous breakdown and was in the prison hospital when Soviet troops liberated the camp on May 3, 1945.
"Today I know that I was lucky," he recalled. He found a job in a Hamburg bank in 1945, but after 18 months his employer ordered him to get a "certificate in good standing" from the local police. Because of his conviction under Paragraph 175, he could not and was fired. Not until the 1960s was he able to find employers willing to overlook the conviction by the Nazis.
Kurt von Ruffin
1901 – 17 November 1996, Germany
Von Ruffin began his career as a singer. Starting in 1927 he sang with the operas of Magdeburg, Mainz, and Nuremberg. He made his film debut in 1931 in Die Faschingsfee and Walzerparadies, also starring in Harry Piel's Bobby geht los in the same year. For the latter film von Ruffin took boxing lessons from heavyweight champion Hans Breitensträter.
After completing filming on Schwarzwaldmädel in 1933, von Ruffin was denounced as a homosexual by another gay man who named him under torture, and imprisoned at Lichtenburg, where many gay men were imprisoned, for two years. Von Ruffin says that SS guards touched prisoners and then beat those who got sexually aroused. He also recalls being forced to watch as some prisoners were beaten to death.
Von Ruffin went on to star in five more movies: Königswalzer (1935), Die Geige lockt (1935), Schwarze Rosen (1935), Die Stunde der Versuchung (1936), and Du bist so schön, Berlinerin (1936) before he was finally prohibited from appearing in any more films. From 1941 until the end of the war, he appeared only on stage.
After the war, von Ruffin appeared in several more films, including Ich mach' Dich glücklich (1949), Der blaue Strohhut (1949), Neues vom Hexer (1965), Die Herren mit der weissen Weste (1970), and his last, Der Unbesiegbare (1985).
Ernst Julius Röhm (November 28, 1887 – July 2, 1934) was a German officer in the Bavarian Army and later an early Nazi leader. He was a co-founder of the Sturmabteilung ("Storm Battalion"; SA), the Nazi Party militia, and later was the SA commander. In 1934, as part of the Night of the Long Knives, he was executed on Hitler's orders as a potential rival.
Ernst Röhm was born in Munich, the youngest of three children (older sister and brother). His father, a railway official, was described as "a harsh man". Although the family had no military tradition, Röhm entered the Royal Bavarian 10th Infantry Regiment Prinz Ludwig at Ingolstadt as a cadet on 23 July 1906. He obtained his commission on 12 March 1908. At the outbreak of war in August 1914, he was adjutant of the 1st Battalion, 10th Infantry Regiment König. The following month, he was seriously wounded in the face at Chanot Wood in Lorraine, and carried the scars for the rest of his life. He was promoted to senior lieutenant (Oberleutnant) in April 1915. During an attack on the fortification at Thiaumont, Verdun, on 23 June 1916, he sustained a serious chest wound. As a result, he spent the remainder of the war in both France and Rumania as a staff officer. He was awarded the Iron Cross First Class on 20 June 1916, just before he was wounded at Verdun, and was promoted to captain (Hauptmann) in April 1917. In October 1918, while serving on the Staff of the Gardekorps, he contracted the deadly Spanish influenza and was not expected to live; however, he survived and recovered after a long period of convalescence.
Following the armistice on 11 November 1918 that ended the war, Röhm continued his military career as an adjutant in the Reichswehr. He was one of the senior members in Colonel von Epp's Bayerisches Freikorps für den Grenzschutz Ost, formed at Ohrdruf in April 1919, which finally overturned the Red Republic in Munich by force of arms on 3 May 1919. In 1919, he joined the German Workers' Party, which soon became the National Socialist German Workers Party (NSDAP). Röhm met Adolf Hitler and they became political allies and close friends.
Röhm's resignation from the Reichswehr was accepted in November 1923 during his time as a prisoner at Stadelheim prison. Following the failed Beer Hall Putsch on 9 November 1923, Röhm, Hitler, General Erich Ludendorff, Lt-Colonel Kriebel and six others were tried in February 1924 on charges of treason. Röhm was found guilty and received one year and three months in prison. However, the sentence was suspended and he was granted a conditional discharge. Hitler was also found guilty and was sentenced to five years imprisonment, although he would only serve nine months.
In April 1924, Röhm became a Reichstag Deputy for the völkisch National Socialist Freedom Party. He made only one speech, urging the release from Landsberg of Lt-Colonel Kriebel. At the 1925 elections the seats won by his party were much reduced, and his name was too far down the list for him to be returned to the Reichstag. While Hitler was in prison, Röhm helped to create the Frontbann as a legal alternative to the then-outlawed SA. At Landsberg prison in April 1924, Röhm had also been given full powers by Hitler to rebuild the SA in any way he saw fit. When in April 1925 Hitler and Ludendorff disapproved of the proposals under which Röhm was prepared to integrate the 30,000-strong Frontbann into the SA, on 1 May 1925 Röhm resigned from all political movements and military brigades and sought seclusion from public life. In 1928 he accepted a post in Bolivia as adviser to the Bolivian Army where he was given the rank of Lt-Colonel and took up his duties after six months' acclimatisation and language tutoring. Following the 1930 revolt in Bolivia Röhm was forced to seek sanctuary in the German Embassy. After the election results in Germany that September, Röhm received a telephone call from Hitler in which the latter said, "I need you", thus provoking Röhm's return to Germany.
In September 1930, as a consequence of the Stennes Revolt in Berlin, Hitler assumed supreme command of the SA as its new Oberster SA-Führer. He sent a personal request to Röhm, asking that he return to serve as the SA's chief of staff. Röhm accepted this offer and commenced his new assignment in early January 1931. Röhm brought radical new ideas to the SA and appointed several of his close friends to its senior leadership.
The SA now numbered over a million. Its traditional function of party leader escort had been given to the SS, but it continued its street battles with "Reds" and attacks on Jews. The SA also attacked or intimidated anyone deemed hostile to the Nazi programme: editors, professors, politicians, uncooperative local officials or businessmen.
Under Röhm, the SA also often took the side of workers in strikes and other labour disputes, attacking strikebreakers and supporting picket lines. SA intimidation contributed to the rise of the Nazis by breaking down the electoral activity of the left-wing parties. However, the SA's reputation for street violence and heavy drinking was a hindrance.
Another hindrance was the more or less open homosexuality of Röhm and other SA leaders such as his deputy Edmund Heines (both of whom would later be sentenced to death on Hitler's orders). In 1931, the Münchener Post, a Social Democratic newspaper, obtained and published Röhm's letters to a friend in which Röhm discussed his sexual affairs with men.
Röhm with Hitler, August 1933
By this time, Röhm and Hitler were so close that they addressed each other as du (the German familiar form of "you"). Röhm was the only top Nazi that Hitler addressed as such. In turn, Röhm was the only Nazi who dared address Hitler as "Adolf," rather than "mein Führer."
As Hitler secured national power in 1933, SA men became auxiliary police, and marched into local government offices to force officials to hand over authority to Nazis.
Röhm and the SA regarded themselves as the vanguard of the "National Socialist revolution." After Hitler's takeover, they expected radical changes in Germany, with power and rewards for themselves. To Hitler, however, the SA's role as storm troopers had been a political weapon that he no longer needed.
Along with Joseph Goebbels, Gottfried Feder and Walther Darré, Röhm was a prominent member of the party's "socialist" faction. This group took the words "Sozialistische" ("socialist") and "Arbeiter" ("worker") in the party's name literally. They largely rejected capitalism (which they associated with Jews) and pushed for nationalisation of major industrial firms, expanded worker control, confiscation and redistribution of the estates of the old aristocracy, and social equality. Röhm spoke of a "second revolution" against "reactionaries" (the National Socialist label for old-line conservatives), as the National Socialists had previously dealt with the Communists and Socialists.
All this was threatening to the business community, which had supported Hitler's rise to power. So Hitler swiftly reassured businessmen that there would be no "second revolution." Many "storm troopers" were of working-class origins and had expected a socialist programme. In fact, it was often said at the time that members of the SA were like a beefsteak — "brown on the outside and red on the inside". They were now disappointed by the new regime's lack of socialist direction and also failure to provide the lavish patronage expected. Röhm even publicly criticized Hitler for his failure to carry through the National Socialist revolution.
Furthermore, Röhm and his SA colleagues thought of their force (now over three million strong) as the future army of Germany, replacing the Reichswehr and its professional officers. Although Röhm had been a member of the officer corps, he viewed them as "old fogies" who lacked "revolutionary spirit." In February 1934, Röhm demanded that the Reichswehr (which under the Treaty of Versailles was limited to 100,000 men) be absorbed into the SA under his leadership as Minister of Defence.
This horrified the army, with its traditions going back to Frederick the Great. The army officer corps viewed the SA as a brawling mob of undisciplined street fighters and were also concerned by the pervasiveness of homosexuality and "corrupt morals" within the ranks of the SA. Further, reports of a huge cache of weapons in the hands of SA members gave the army commanders even more concern. The entire officer corps opposed Röhm's proposal, insisting that honour and discipline would vanish if the SA gained control. However, it appeared that Röhm and the SA would settle for nothing less.
Hitler privately shared much of Röhm's animus toward the traditionalists in the army. Nevertheless, he had gained power with the army's support, and he wanted the army's support to succeed the ailing 86-year-old Paul von Hindenburg as President.
Meanwhile, Hitler had already begun preparing for the struggle. In February he told British diplomat Anthony Eden that he planned to reduce the SA by two thirds. Also in February, he announced that the SA would be left only a few minor military functions.
Röhm responded with further complaints about Hitler and began expanding the armed elements of the SA. To many it appeared as if the SA was planning or threatening a rebellion. In March, Röhm offered a compromise in which a few thousand SA leaders would be taken into the army, but the army promptly rejected it.
On 11 April 1934, Hitler met with German military leaders on the ship Deutschland. By this time, Hitler had learned that the ailing Hindenburg would die before the year's end. Hitler informed them of Hindenburg's declining health and proposed the Reichswehr support him as Hindenburg's successor. In exchange, Hitler offered to reduce the SA, suppress Röhm's ambitions, and guarantee the Reichswehr would be Germany's only military force. William L. Shirer asserts that Hitler also promised to expand the army and navy.
However, both the Reichswehr and business conservatives continued their anti-SA complaints to Hindenburg. In early June 1934, defence minister Werner von Blomberg, on Hindenburg's behalf, issued an ultimatum to Hitler: unless political tension ended in Germany, Hindenburg would likely declare martial law and turn over control of the country to the army. Knowing such a step could forever deprive him of power, Hitler decided to carry out his pact with the Reichswehr to suppress the SA. This meant a showdown with Röhm. In Hitler's view, the army and the SA constituted the only real remaining power centres in Germany that were independent in his National Socialist state.
The army was willing to submit. Blomberg had the swastika added to the army's insignia in February and ended the army's practice of preference for "old army" descent in new officers, replacing it with a requirement of "consonance with the new government."
Although determined to curb the power of the SA, Hitler put off doing away with his long-time comrade to the very end. A political struggle within the party grew, with those closest to Hitler, including Prussian premier Hermann Göring, Propaganda Minister Joseph Goebbels and SS Chief Heinrich Himmler, positioning themselves against Röhm. As a means of isolating Röhm, on 20 April 1934, Göring transferred control of the Prussian political police (Gestapo) to Himmler, who, Göring believed, could be counted on to move against Röhm. Himmler, Heydrich and Göring used Röhm's published anti-Hitler rhetoric to support a claim that the SA was plotting to overthrow Hitler. Himmler and his deputy Heydrich, chief of the SS Security Service (the SD), assembled a dossier of manufactured evidence to suggest that Röhm had been paid twelve million marks by France to overthrow Hitler. Leading officers were shown falsified evidence on June 24 that Röhm planned to use the SA to launch a plot against the government (Röhm-Putsch).
By this time, these fabricated stories had been given official credence. Reports of the SA threat were passed to Hitler, and he knew it was time finally to act. Meanwhile, Göring, Himmler, Heydrich and Victor Lutze (at Hitler's direction) drew up lists of people in and outside the SA to be killed. Himmler and Heydrich issued marching orders to the SS, while Sepp Dietrich went around showing army officers a purported SA execution list.
Meanwhile, Röhm and several of his companions went away on holiday at a resort in Bad Wiessee. On June 28, Hitler phoned Röhm and asked him to gather all the SA leaders at Bad Wiessee on June 30 for a conference. Röhm agreed, apparently unsuspicious.
The date of June 30 marked the beginning of the Night of the Long Knives. At dawn on 30 June, Hitler flew to Munich and then drove to Bad Wiessee, where he personally arrested Röhm and the other SA leaders. All were imprisoned at Stadelheim Prison in Munich. From 30 June to 2 July 1934, the entire leadership of the SA was purged, along with many other political adversaries of the Nazis.
Hitler was uneasy authorizing Röhm's execution and gave Röhm an opportunity to commit suicide. On July 2, Röhm was visited by SS-Brigadeführer Theodor Eicke (then Kommandant of the Dachau concentration camp) and SS-Obersturmbannführer Michael Lippert, who laid a pistol on the table, told Röhm he had ten minutes to use it and left. Röhm refused and stated "If I am to be killed, let Adolf do it himself." Having heard nothing in the allotted time, Eicke and Lippert returned to Röhm's cell to find him standing. Röhm had his bare chest puffed out in a gesture of defiance as Lippert shot him in the chest at point blank range. He was buried in the Westfriedhof (Western Cemetery) in Munich.
The purge of the SA was legalized the next day with a one-paragraph decree: the Law Regarding Measures of State Self-Defence. At this time no public reference was made to the alleged SA rebellion; instead there were generalised references to misconduct, perversion, and some sort of plot. John Toland noted that Hitler had long been privately aware that Röhm and his SA associates were homosexuals; although he disapproved of their behaviour, he stated that 'the SA are a band of warriors and not a moral institution.' National Socialist propaganda now made use of their sexual orientation as justification of the executions.
A few days later, the claim of an incipient SA rebellion was publicised and became the official reason for the entire wave of arrests and executions. Indeed, the affair was labeled the "Röhm-Putsch" by German historians, though after World War II it was usually qualified as the "alleged Röhm-Putsch" or known as the "Night of the Long Knives." In a speech on July 13, Hitler alluded to Röhm's homosexuality and explained the purge as chiefly a defence against treason.
August 16, 1923 – November 25, 2005
Pierre Seel (August 16, 1923, Haguenau, Bas-Rhin – November 25, 2005) was a gay Holocaust survivor and the only French person to have testified openly about his experience of deportation during World War II due to his homosexuality.
Pierre was the fifth and last son of an affluent Catholic Alsatian family, and he was born at the family castle of Fillate in Haguenau. At the age of eleven, he discovered that his younger sister, Josephine (Fifine to him), was in fact his cousin, adopted by his father when her mother died. His father ran a successful patisserie-confiserie shop on Mulhouse's main street (at 46 rue du Sauvage). His mother, Emma Jeanne, once director of a department store, joined the family business when she married. By his late teens, Pierre Seel was part of the Mulhouse (Alsace) gay and Zazou subcultures. He suspected that his homosexuality was due to the repressive Catholic morals of his family which forbade him to show interest in girls his age during his early teens. He found it difficult to come to terms with and accept his homosexuality, and described himself as short tempered.
In 1939, he was in a public garden (le Square Steinbach) notorious as a "cruising" ground for men. While he was there, his watch was stolen, a gift that his godmother had given to him at his recent communion. Reporting the theft to the police meant that, unknown to him, his name was added to a list of homosexuals held by the police (homosexuality had not been illegal in France since 1792; the Vichy Regime did not, contrary to legend, recriminalize homosexuality, but in August 1942 it did outlaw sexual relations between an adult and a minor under twenty-one). The German invasion curtailed Seel's hopes of studying textiles in Lille. He completed vocational training in accounting, decoration and sales and found a sales assistant job at a neighbouring shop.
On 3 May 1941, Seel was arrested. He was tortured and raped with a piece of wood. He was then sent to the city jail before being transferred on 13 May 1941 to the Schirmeck-Vorbrück camp, about 30 km west of Strasbourg. His prison uniform was marked with a blue bar (marking Catholic and "a-social" prisoners) rather than the infamous pink triangle which was not in use at Schirmeck. He later noted: "There was no solidarity for the homosexual prisoners; they belonged to the lowest caste. Other prisoners, even when between themselves, used to target them."
On 6 November 1941, after months of starvation, ill treatment and forced labour, Seel was set free with no explanation and made a German citizen. He was sworn to secrecy about his experience by Karl Buck, the commander of the camp. He was made to report daily to the Gestapo offices.
The rest of the war
Between 21 March and 26 September 1942, Seel was forced to join the RAD (Reichsarbeitsdienst) to receive some military training. First, he was sent to Vienna as an aide-de-camp to a German officer; then he was posted to a military airport in Gütersloh near the Dutch-German border.
On 15 October 1942, he was incorporated into the Wehrmacht and became one of the "malgré-nous" ("despite ourselves"), young men born in Alsace or Lorraine who were enrolled against their will into the German army and had to fight alongside their enemies against the people they supported. During the next three years, he criss-crossed Europe with little recollection of events, places and dates. This time he was sent to Yugoslavia. While fighting the local resistance, he and his fellow soldiers burned isolated villages inhabited only by women and children. One day he found himself in front of a partisan who broke Seel's jaw, as a result of which he soon lost all his teeth. The man did not survive the ensuing fight. Wounded, Seel was sent to Berlin in an administrative position.
In spring 1943, to his bemusement, Seel was sent to Pomerania to a Lebensborn, one of a dozen places in the Reich dreamed up by Heinrich Himmler and dedicated to breeding a new race according to the Nazis' standards of Aryan "purity". Young, healthy couples were encouraged to procreate and give their children to the Reich. He stayed there only a few days.
In summer 1943, he volunteered to join the Reichsbank and became a teller on trains for soldiers on leave between Belgrade and Salonica. This ended with the attempt on Hitler's life on 20 July 1944, which prompted a tightening of authority. Seel found himself helping the civilian population in the Berlin underground during forty days and nights of Allied attacks.
As things started to unravel for the Reich, Seel was sent to Smolensk on the Russian front. After he allowed the horse of the officer he was serving to run away, Seel was sent to a dangerous and exposed position alone with another Alsatian. The enemy kept firing at them, and soon Seel's companion was killed. He spent three days there, close to madness, believing himself forgotten.
As the German debacle became imminent, his commanding officer invited him to desert with him. Soon after, the officer was killed; Seel, finding himself alone, decided to surrender to the Soviet troops and began to follow them west. Somewhere in Poland, however, he was arrested and threatened with being shot as part of a reprisal execution after the murder of an officer. He saved his life by stepping forward in front of the firing squad and singing the Internationale.
In Poland, Seel parted ways with the Russian army and joined a group of concentration camp survivors soon to be brought back to France. The Red Cross soon took over and organised a train convoy. This, however, did not go west but south, through Odessa and the Black Sea, in terrible sanitary conditions. Seel was still in Poland on 8 May 1945 when the armistice was declared. In Odessa, where he was put in charge of keeping order in his refugee camp, he contracted malaria. At this time he was also advised to change his name to Celle and to hide the fact that he was Alsatian by saying he was from Belfort.
After a long wait in Odessa for a boat to take him back to France, "Pierre Celle" finally arrived in Paris on 7 August 1945 after a train journey through Europe, via Romania, Germany, the Netherlands and Belgium. Again, Seel found himself requisitioned for an administrative task, in this case ticking off the long lists of other refugees being sent home.
On reaching Mulhouse, Seel realized that, like all the others, he would have to lie about his true story and about the reasons for his deportation. "I was already starting to censor my memories, and I became aware that, in spite of my expectations, in spite of all I had imagined, of the long-awaited joy of returning, the true Liberation was for other people."
After the war
After the end of the war, the Charles de Gaulle government cleaned up the French Penal Code, principally getting rid of the anti-Semitic laws. The article against homosexual relations between adults and minors, however, remained in force until 1982. The homophobic atmosphere of the 1940s-1960s meant that for the returning victims, the possibility of telling their story was thwarted by the fear of further stigmatisation. In his book, Seel also notes an increase of homophobic attacks in Mulhouse, after the war. In his family itself, Seel found a negative reaction to his homosexuality. His closest relatives decided to avoid broaching the subject while other members of the extended family made humiliating jokes. His godfather disinherited him.
After starting work as a stock manager at a fabric warehouse, Seel set up an association to help local destitute families by giving out food and clothes. He also cared for his ageing and ailing mother, with whom he grew close and who, for over thirty years, was the only person to whom he related his experience. For four years, the beginning of what he called the years of shame, Seel led a life of "painful sadness", during which he slowly came to decide that he must renounce his homosexuality. Following in his parents' footsteps, he contacted a dating agency and on 21 August 1950, he civilly married the daughter of a Spanish dissident (the religious marriage took place on 30 September 1950 at Saint-Ouen). He decided not to tell his wife about his homosexuality.
Their first child was stillborn, but they eventually had two sons (1952 and 1954) and a daughter (1957). In 1952, for the birth of their second child, they moved near Paris, to the Vallée de Chevreuse, where Seel opened a fabric store that was not successful. He soon had to find work at a larger Parisian textile company. The family became involved with the local Catholic community. Seel found it difficult to relate to his children; he felt remote from his last-born, and he did not know how to express his love for his two boys without it being misinterpreted.
The 1960s offered little stability to the family, with moves to Blois, Orléans, Compiègne, Rouen and back to Compiègne, following Seel's career. This instability put further strain on his marriage. In 1968, Seel found himself trapped for four days in the besieged Sorbonne, where he had been sent as an observer by his local Parents Association. He then went down to Toulouse to inspect the family's new flat, which came with his wife's new job in the administration. There, he was arrested on suspicion of stirring up the young demonstrators. The family finally settled in Toulouse.
During the next ten years, Seel grew further from his wife, tormented by feelings of inadequacy, shame, and confusion about his sexuality. By the time he and his wife separated in 1978, he was already on tranquillisers. He started to drink and considered becoming homeless, even sleeping rough three times to test himself. After one of his sons threatened never to see him again if he didn't stop drinking, he joined a counselling group. In 1979, while working for an insurance company and still trying for a reconciliation with his estranged wife, he attended a debate in a local bookshop for the launch of the French edition of Heinz Heger's testimony (The Men with the Pink Triangle, which inspired Martin Sherman to write the play Bent). After the event, Seel met with the speakers and a meeting was organised for the next day.
He joined his local branch of David et Jonathan, a gay and lesbian Christian association. On 9 April 1989, he returned to the sites of the Schirmeck and Struthof camps for the first time. He spent the last 12 years or so with his long-term partner, Eric Féliu, with whom he bred dogs in Toulouse, which helped him to overcome the fear of dogs he had developed after Jo's death. Seel died of cancer in Toulouse in November 2005. He is buried in Bram, in the Aude département.
In 1981, the testimony collected by Jean-Pierre Joecker (director and founder of the gay magazine Masques) was published anonymously in a special edition of the French translation of the play Bent by Martin Sherman. In April 1982, in response to anti-gay declarations and actions by Léon Elchinger, the Bishop of Strasbourg, Seel spoke publicly and wrote an open letter to the Bishop on 18 November. He simultaneously circulated the text to his family. The letter was published in Gai Pied Hebdo No 47 on 11 December. At the same time, he started the official process of getting compensation from the state.
From the time he came forward publicly until the end of his life, Seel was active as an advocate for the recognition of homosexual victims of the Nazis—and notably of the forgotten homosexual victims from the French territories of Alsace and Moselle, which had been annexed by Nazi Germany. Seel came to be known as the most outspoken activist among the men who had survived internment as homosexuals during the Third Reich. He was an active supporter of the Mémorial de la Déportation Homosexuelle, a French national association founded in 1989 to honor the memory of homosexuals persecuted by the Nazi regime and to advocate formal recognition of these victims in the ceremonies held annually to commemorate citizens and residents of France deported to the concentration camps.
In 1994, Seel published the book Moi, Pierre Seel, déporté homosexuel (I, Pierre Seel, Deported Homosexual), written with the assistance of journalist and activist Jean Le Bitoux, founder of the long-running French gay periodical Gai Pied; the book subsequently appeared in translation in English, German and Spanish. Seel appeared on national television and in the national press in France. His story also was featured in a 2000 documentary film on the Nazi persecution of homosexuals, Paragraph 175, directed by San Francisco filmmakers Rob Epstein and Jeffrey Friedman. Returning to Germany for the first time since the war, Seel received a five-minute standing ovation at the documentary's premiere at the Berlin film festival.
Seel also found himself under attack in the 1980s and 1990s, even receiving death threats. After he appeared on French television, he was attacked and beaten by young men shouting homophobic epithets. Catherine Trautmann, then the Mayor of Strasbourg and later a Socialist Party culture minister, once refused to shake his hand during a commemorative ceremony.
In 2003, Seel received official recognition as a victim of the Holocaust by the International Organization for Migration's program for aiding Nazi victims. In April 2005, President Jacques Chirac, during the "Journée nationale du souvenir des victimes et des héros de la déportation" (the French equivalent to the Holocaust Memorial Day), said: "In Germany, but also on French territory, men and women whose personal lives were set aside, I am thinking of homosexuals, were hunted, arrested and deported." On 23 February 2008, the municipality of Toulouse renamed a street in the city in honour of Seel. The name plaque reads "Rue Pierre Seel - Déporté français pour homosexualité - 1923-2005".
From I, Pierre Seel: Deported Homosexual by Pierre Seel, translated from French by Joachim Neugroschel, published by Basic Books, a division of HarperCollins, 1995
"One day the loudspeakers ordered us to report immediately to the roll-call site. Shouts and yells urged us to be there without delay. Surrounded by SS men, we had to form a square and stand at attention, as we did for morning roll call. The commandant appeared with his entire general staff. I assumed he was going to bludgeon us once again with his blind faith in the Reich, together with a list of orders, insults and threats -- emulating the infamous outpourings of his master, Adolph Hitler. But the actual ordeal was far worse: an execution. Two SS men brought a young man to the center of the square. Horrified, I recognized Jo, my loving friend, who was only 18 years old. I hadn't previously spotted him in the camp. Had he arrived before or after me? We hadn't seen each other during the days before I was summoned by the Gestapo.
"Now I froze in terror. I prayed that he would escape their lists, their roundups, their humiliations. And here he was, before my powerless eyes, which filled with tears. Unlike me, he had not carried dangerous letters, torn down posters, or signed any statements. What had happened? What had the monsters accused him of? Because of my anguish I have completely forgotten the wording of the death sentence.
"The loudspeakers broadcast some noisy classical music while the SS stripped him naked and shoved a tin pale over his head. Next, they sicced their ferocious German shephards on him: the guard dogs first bit into his groin and thighs, then devoured him right in front of us. His shrieks of pain were distorted and amplified by the pain in which his head was trapped. My rigid body reeled, my eyes gaped at so much horror, tears poured down my cheeks, I fervently prayed that he would black out quickly.
"Since then I sometimes wake up howling in the middle of the night. For fifty years now that scene has kept ceaselessly passing and repassing through my mind. I will never forget the barbaric murder of my love -- before my eyes, before our eyes, for their were hundreds of witnesses..."
From Paragraph 175 a documentary by Rob Epstein and Jeffrey Friedman, 2000.
When Alsace-Lorraine was annexed by the Germans in 1940, the Nazis systematically began to weed out "anti-social" elements. They directed the French police to establish the notorious "Pink Lists" to keep track of homosexuals. One of their targets was 17-year-old Pierre Seel. Pierre was arrested after reporting a theft that occurred in a homosexual club. He was interrogated both about his sexuality and about his suspected involvement in resistance activities before being sent to the internment camp at Schirmeck. While there he was forced to build crematoria, at Struthof, a neighboring concentration camp, and was violated with broken rulers and used as a human dart board by camp orderlies with syringes. At the end of 1941, Pierre and thousands of other Alsatians were forced to join the German army. This was the ultimate humiliation: to be forced to fight on the side of the enemy. Having survived several allied bombings, he was eventually taken prisoner by the Russians, who gave him his freedom. After the war he was allowed back into his family under the condition that he never reveal the true circumstances of his arrest. He went into a downward spiral, entering a marriage of convenience and eventually becoming suicidal -- until deciding to take a stand and make his story public.
Mug shot of homosexual Auschwitz prisoner: August Pfeiffer, servant, born Aug. 8, 1895, in Weferlingen, arrived at Auschwitz Nov. 1, 1941, and died there Dec. 28, 1941. (State Museum of Auschwitz, Oswiecim, Poland)
Friedrich Althoff (b. May 16, 1899), a waiter from Duesseldorf. (Nordrhein-Westfälische Hauptstaatsarchiv Düsseldorf, RW 58-61940)
One man recounts how the Nazis' assumption of power in 1933 limited homosexuals' freedom and created an atmosphere of fear. In 1935 the Nazi regime revised Paragraph 175 of the German criminal code to make illegal a very broad range of behavior between men. Dr. Magnus Hirschfeld, a Jew and homosexual, founded the Institute for Sexual Sciences (Berlin, 1928; Suddeutscher Verlag Bilderdienst, Munich). The Eldorado, a club where homosexuals socialized, was closed (Berlin, March 5, 1933; Landesarchiv Berlin). The Institute for Sexual Sciences during a Nazi raid (Berlin, May 6, 1933; Landesarchiv Berlin). In a speech given before a conference of SS officers on February 17, 1937, Himmler included remarks on the question of homosexuality. Friedrich-Paul von Groszheim, one of the 'forgotten victims' of the Holocaust, recently broke his silence to give testimony (USHMM).
Lesbians in the Holocaust
"The non-criminalization of female homosexuality meant that lesbians were not intensively prosecuted in the same way or to the same degree as homosexual men. But they did suffer, for example, the same destruction of clubs and other organizations of the homosexual subculture, the banning of its papers and magazines, the closure or surveillance of the bars at which they met. This led to a dispersal of lesbian women and their withdrawal into private circles of friends. Many broke off all contacts for fear of discovery and even changed their place of residence. A collective lesbian life-style and identity, which had begun to take shape since the turn of the century and especially in the years of the Weimar Republic, was destroyed when the Nazi's came to power, and the effects would last well beyond the end of the 'Third Reich'.
"The exemption of female homosexuality from penal sanctions was one major reason why the registration and prosecution bodies set up within the Gestapo and the Criminal Police in the wake of Roem's murder in June 1934 mainly concentrated on the male homosexual 'enemy of the state'. The paucity of sources makes it impossible to gauge the extent to which lesbian women were also being compulsory registered -- for example, as a result of denunciation to authorities. Scattered evidence indicates that reports were collected about lesbians by the police, and also by other organizations such as the Race Policy Bureau of the NSDAP. But the scale of this is not known -- nor, above all, the consequences which followed from it.
"In only a few cases can it be demonstrated that women were tried on the pretext of other offenses but in reality because of their homosexuality. In one documented instance female homosexuality was cited by the administration of the Ravensbruck concentration camp as the grounds of detention. Thus, on 30 November 140 the transportation list for this women's camp names the day's eleventh 'admission' as the non-Jewish Elli S., exactly 26 years of age. The term 'lesbian' actually appears in the entry as the reason for detention. Elli S. was apparently put among the political prisoners, but nothing further is known of her fate.
"Other cases are known in which lesbians were punished as 'subversive of the military potential.' And, where a so-called relation of dependence existed between a superior and a subordinate or between a teacher and a school girl, the provision of paragraph 176 of the penal code would apply." [Hidden Holocaust? pg 12-13].
According to the 2000 documentary Paragraph 175, there are only five known cases of women being imprisoned solely because of their lesbianism, although other researchers have documented more cases. The experiences of Annette Eick were reported in that film:
Born in 1909 to an educated, Jewish family in Berlin, Annette discovered her lesbian identity when she was ten: "We had to write a composition about how we imagined our later life would be, and I wrote: I want to live in the country with an elderly girlfriend and have a lot of animals. I don't want to get married and I don't want to have children, but I'll write." In the 1920s, Annette was active in lesbian cultural life in Berlin, spending time in women's clubs and occasionally writing poetry and short stories for a lesbian journal. As the Nazis gained power, Annette managed to emigrate to England, with the help of an older woman she had met at a bar and whom she had a crush on. She later learned that her parents had been killed at Auschwitz. She eventually settled in the English countryside with her lover of many years, and wrote poetry.
Henny Schermann and Mary Punjer were arrested in the raid of a lesbian bar in 1940 and taken to Ravensbruck. Both women were Jewish, and while their internment documents listed their Judaism, they also noted that Schermann was a "compulsive lesbian" and Punjer a "very active (sassy) lesbian". They were gassed to death in early 1942 in the Bernburg Nursing Home near Dessau, which had been adapted as a death factory.
Austrian writer and lesbian Grete von Urbanitzky propagated Nazi ideology in her writings as early as 1920, but did not escape their persecution. In 1936, she was forced to emigrate to France and then to Switzerland. In 1941, all her books were banned in Germany.
Other collaborators with the Nazis experienced similar fates. For example, only a few months before the end of the war, a local leader of a Nazi women's organization in a small German city was arrested in January 1945 for lesbianism. She was interned in Ravensbruck, and her ultimate fate is unknown.
PHOTO: Annette Eick and friend, from Paragraph 175 (official website of the documentary).
PHOTO: Henny Schermann, internment photograph from Ravensbruck (United States Holocaust Memorial Museum).
PERSECUTION OF HOMOSEXUALS IN THE THIRD REICH — PHOTOGRAPH
Identification pictures of a homosexual prisoner who arrived in Auschwitz on November 27, 1941, and was transferred to Mauthausen on January 25, 1942. Auschwitz, Poland
Identification pictures of a prisoner, accused of homosexuality, recently arrived at the Auschwitz concentration camp. Auschwitz, Poland, between 1940 and 1945.
Identification pictures of a prisoner, accused of homosexuality, who arrived at the Auschwitz concentration camp on June 6, 1941. He died there a year later. Auschwitz, Poland.
An official order incarcerating the accused in the Sachsenhausen concentration camp for committing homosexual acts.
Interior designer from Duesseldorf who was charged with homosexuality and imprisoned for 18 months. Duesseldorf, Germany, date uncertain.
A writer from Duesseldorf who was arrested for homosexuality. Duesseldorf, Germany, 1938.
An author and actor who was imprisoned in 1937 for 27 months for homosexuality. In 1942, he was deported to Sachsenhausen concentration camp where he was a prisoner for three years. Berlin, Germany, before 1937.
Identification pictures of a bartender from Duisburg who was arrested for homosexuality. Duisburg, Germany, August 27, 1936.
A couple dances at the "Eldorado," a nightclub frequented by members of Berlin's homosexual community. The nightclub, along with other similar establishments, was closed by the Nazi government in the spring of 1933. Berlin, Germany, 1929.
Homosexuals and the Third Reich
By James Steakley
“After roll call on the evening of June 20, 1942, an order was suddenly given: 'All prisoners with the pink triangle will remain standing at attention!' We stood on the desolate, broad square, and from somewhere a warm summer breeze carried the sweet fragrance of resin and wood from the regions of freedom; but we couldn't taste it, because our throats were hot and dry from fear. Then the guardhouse door of the command tower opened, and an SS officer and some of his lackeys strode toward us. Our detail commander barked: 'Three hundred criminal deviants, present as ordered!' We were registered, and then it was revealed to us that in accordance with an order from the Reichsfuhrung SS, our category was to be isolated in an intensified-penalty company, and we would be transferred as a unit to the Klinker Brickworks the next morning. The Klinker factory! We shuddered, for the human death mill was more than feared.”
Appallingly little information is available on the situation of homosexuals in Nazi Germany. Many historians have hinted darkly at the “unspeakable practices” of a Nazi elite supposedly overrun with “sexual perverts,” but this charge is both unsubstantiated and insidious. Upon closer examination, it turns out to be no more than the standard use of antigay prejudice to defame any given individual or group, a practice, incidentally, of which the Nazis were the supreme masters. The Nazis were guilty of very real offences, but their unspeakable practices were crimes against mankind.
That homosexuals were major victims of these crimes is mentioned in only a few of the standard histories of the period. And those historians who do mention the facts seem reluctant to dwell on the subject and turn quickly to the fate of other minorities in Nazi Germany. Yet tens, perhaps hundreds of thousands of homosexuals were interned in Nazi concentration camps. They were consigned to the lowest position in the camp hierarchy, and subjected to abuse by both guards and fellow prisoners; most of them perished.
Obviously, gay people are going to have to write their own history. And there is enough authentic documentation on the Nazi period to undertake a first step in this direction. The words at the beginning of this article were written by one concentration camp survivor, LD Classen von Neudegg, who published some of his recollections in a German homophile magazine in the Fifties. Here are a few more excerpts from his account of the treatment of homosexuals in the concentration camp at Sachsenhausen:
“We had been here for almost two months, but it seemed like endless years to us. When we were 'transferred' here, we had numbered around three hundred men. Whips were used more frequently each morning, when we were forced down into the clay pits under the wailing of the camp sirens. 'Only fifty are still alive,' whispered the man next to me. 'Stay in the middle, then you won't get hit so much.'
“...(The escapees) had been brought back. 'Homo' was scrawled scornfully across their clothing for their last walk through the camp. To increase their thirst, they were forced to eat oversalted food, and then they were placed on the block and whipped. Afterwards, drums were hung around their necks, which they had to beat while shouting, 'Hurrah, we're back!' The three men were hanged.
“...Summer, 1944. One morning there was an eruption of restlessness among the patients of the hospital barracks where I worked. Fear and uncertainty had arisen from rumours about new measures on the part of the SS hospital administration. At the administrator's order, the courier of the political division had requisitioned certain medical records, and now he arrived at the camp for delivery. Fever charts shot up; the sick were seized with a gnawing fear. After a few days, the awful mystery of the records was solved. Experiments had been ordered involving living subjects and phosphorus: methods of treating phosphorus burns were to be developed and tested. I must be silent about the effects of this series of experiments, which proceeded with unspeakable pain, fear, blood and tears: for it is impossible to put the misery into words.”
Dr. Neudegg's recollections are confirmed in many details by the memoirs of Rudolf Hoss, adjutant at Sachsenhausen and, later, commandant of Auschwitz. Neudegg's account is something of a rarity: the few homosexuals who managed to survive internment have tended to hide the fact, largely because homosexuality continued to be a crime in postwar West Germany. This is also the reason why homosexuals have been denied any compensation by the otherwise munificent West German government.
The number of homosexuals who died in Nazi concentration camps is unknown and likely to remain so. Although statistics are available on the number of men brought to trial on charges of “lewd and unnatural behaviour,” many more were sent to camps without the benefit of a trial. Moreover, many homosexuals were summarily executed by firing squads; this was particularly the case with gays in the military, which encompassed nearly every able-bodied man during the final years of the war. Finally, many concentration camps systematically destroyed all their records when it became apparent that German defeat was imminent.
* * *
The beginning of the Nazi terror against homosexuals was marked by the murder of Ernst Rohm on June 30, 1934: the "Night of the Long Knives." Rohm was the man who, in 1919, first made Hitler aware of his own political potential, and the two were close friends for fifteen years. During that time, Rohm rose to SA Chief of Staff, transforming the Brownshirt militia from a handful of hardened goons and embittered ex-soldiers into an effective fighting force five hundred thousand strong, the instrument of Nazi terror. Hitler needed Rohm's military skill and could rely on his personal loyalty, but he was ultimately a pragmatist. As part of a compromise with the Reichswehr (regular army) leadership, whose support he needed to become Fuhrer, Hitler allowed Goering and Himmler to murder Rohm along with dozens of Rohm's loyal officers.
For public relations purposes, and especially to quell the outrage felt throughout the ranks of the SA, Hitler justified his blatant power play by pointing to Rohm's homosexuality. Hitler, of course, had known of Rohm's homosexuality since 1919, and it became public knowledge in 1925, when Rohm appeared in court to charge a hustler with theft. All this while the Nazi Party had a virulently antigay policy, and many Nazis protested that Rohm was discrediting the entire Party and should be purged. Hitler, however, was quite willing to cover up for him for years until he stood in the way of larger plans.
* * *
The Nazi Party came to power in 1933, and a year later Rohm was dead. While Rohm and his men were being rounded up for the massacre (offered a gun and the opportunity to shoot himself, Rohm retorted angrily: “Let Hitler do his own dirty work”), the new Chief of Staff received his first order from the Fuhrer: “I expect all SA leaders to help preserve and strengthen the SA in its capacity as a pure and cleanly institution. In particular, I should like every mother to be able to allow her son to join the SA, Party, and Hitler Youth without fear that he may become morally corrupted in their ranks. I therefore request all SA commanders to take the utmost pains to ensure that offences under Paragraph 175 are met by immediate expulsion of the culprit from the SA and the Party.”
Hitler had good reason to be concerned about the reputation of Nazi organizations, most of which were based on strict segregation of the sexes. Hitler Youth, for example, was disparagingly referred to as Homo Youth throughout the Third Reich, a characterization which the Nazi leadership vainly struggled to eliminate. Indeed, most of the handful of publications on homosexuality which appeared during the Fascist regime were devoted to new and rather bizarre methods of “detection” and “prevention.”
Rudolf Diels, the founder of the Gestapo, recorded some of Hitler’s personal thoughts on the subject: “He lectured me on the role of homosexuality in history and politics. It had destroyed ancient Greece, he said. Once rife, it extended its contagious effects like an ineluctable law of nature to the best and most manly of characters, eliminating from the reproductive process precisely those men on whose offspring a nation depended. The immediate result of the vice was, however, that unnatural passion swiftly became dominant in public affairs if it were allowed to spread unchecked.”
* * *
The tone had been set by the Rohm putsch, and on its first anniversary, June 28, 1935, the campaign against homosexuality was escalated by the introduction of the “Law for the Protection of German Blood and German Honour.” Until 1935, the only punishable offence had been anal intercourse; under the new Paragraph 175a, ten possible “acts” were punishable, including a kiss, an embrace, even homosexual fantasies! One man, for instance, was successfully prosecuted on the grounds that he had observed a couple making love in a park and watched only the man.
Under the Nazi system, criminal acts were less important in determining guilt than criminal intent. The “phenomenological” theory of justice claimed to evaluate a person's character rather than his deeds. The “healthy sensibility of the people” (gesundes Volksempfinden) was elevated to the highest normative legal concept, and the Nazis were in a position to prosecute an individual solely on the grounds of his sexual orientation. (After World War II, incidentally, this law was immediately struck from the books in East Germany as a product of Fascist thinking, while it remained on the books in West Germany.)
Once Paragraph 175a was in effect, the annual number of convictions on charges of homosexuality leaped to about ten times the number in the pre-Nazi period. The law was so loosely formulated that it could be and was applied against heterosexuals whom the Nazis wanted to eliminate. The most notorious example of an individual convicted on trumped-up charges was General Werner von Fritsch, Army Chief of Staff; and the law was also used repeatedly against members of the Catholic clergy. But the law was undoubtedly used primarily against gay people, and the court system was aided in the witch-hunt by the entire German populace, which was encouraged to scrutinize the behaviour of neighbours and to denounce suspects to the Gestapo. The number of men convicted of homosexuality during the Nazi period totaled around fifty thousand:
1933 — 853
1934 — 948
1935 — 2,106
1936 — 5,320
1937 — 8,271
1938 — 8,562
1939 — 7,614
1940 — 3,773
1941 — 3,735
1942 — 3,963
1943 (first quarter) — 966
1944-45 — ?
The Gestapo was the agent of the next escalation of the campaign against homosexuality. Ex-chicken farmer Heinrich Himmler, Reichsfuhrer SS and head of the Gestapo, richly deserves a reputation as the most fanatically homophobic member of the Nazi leadership. In 1936, he gave a speech on the subject of homosexuality and described the murder of Ernst Rohm (which he had engineered) in these terms: “Two years ago...when it became necessary, we did not scruple to strike this plague with death, even within our own ranks.” Himmler closed with these words: “Just as we today have gone back to the ancient Germanic view on the question of marriage mixing different races, so too in our judgment of homosexuality, a symptom of degeneracy which could destroy our race, we must return to the guiding Nordic principle: extermination of degenerates.”
* * *
A few months earlier, Himmler had prepared for action by reorganizing the entire state police into three divisions. The political executive, Division II, was directly responsible for the control of “illegal parties and organizations, leagues and economic groups, reactionaries and the Church, freemasonry, and homosexuality.”
Himmler personally favoured the immediate “extermination of degenerates,” but he was empowered to order the summary execution only of homosexuals discovered within his own bureaucratic domain. Civilian offenders were merely required to serve out their prison sentences (although second offenders were subject to castration).
In 1936, Himmler found a way around this obstacle. Following release from prison, all “enemies of the state”, including homosexuals, were to be taken into protective custody and detained indefinitely. “Protective custody” (Schutzhaft) was a euphemism for concentration camp internment. Himmler gave special orders that homosexuals be placed in Level Three camps, the human death mills described by Neudegg. These camps were reserved for Jews and homosexuals.
The official SS newspaper, Das Schwarze Korps, announced in 1937 that there were two million German homosexuals and called for their death. The extent to which Himmler succeeded in this undertaking is unknown, but the number of homosexuals sent to camps was far in excess of the fifty thousand who served jail sentences. The Gestapo dispatched thousands to camps without a trial. Moreover, “protective custody” was enforced retroactively, so that any gay who had ever come to the attention of the police prior to the Third Reich was subject to immediate arrest. (The Berlin police alone had an index of more than twenty thousand homosexuals prior to the Nazi takeover.) And starting in 1939, gays from Nazi-occupied countries were also interned in German camps.
The chances for survival in a Level Three camp were low indeed. Homosexuals were distinguished from other prisoners by a pink triangle, worn on the left side of the jacket and on the right pant leg. There was no possibility of “passing” for straight, and the presence of “marked men” in the all-male camp population evoked the same reaction as in contemporary prisons: gays were brutally assaulted and sexually abused.
* * *
“During the first weeks of my imprisonment,” wrote one survivor, “I often thought I was the only available target on whom everyone was free to vent his aggressions. Things improved when I was assigned to a labour detail that worked outside the camp at Metz, because everything took place in public view. I was made clerk of the labour detail, which meant that I worked all day and then looked after the records at the guardhouse between midnight and 2 am. Because of this 'overtime' I was allowed seconds at lunch if any food was left over. This is the fact to which I probably owe my survival...I saw quite a number of pink triangles. I don't know how they were eventually killed...One day they were simply gone.”
Concentration camp internment served a twofold purpose: the labour power of prisoners boosted the national economy significantly, and undesirables could be effectively liquidated by the simple expedient of reducing their food rations to a level slightly below subsistence. One survivor tells of witnessing “Project Pink” in his camp: “The homosexuals were grouped into liquidation commandos and placed under triple camp discipline. That meant less food, more work, stricter supervision. If a prisoner with a pink triangle became sick, it spelled his doom. Admission to the clinic was forbidden.”
This was the practice in the concentration camps at Sachsenhausen, Natzweiler, Fuhlsbuttel, Neusustrum, Sonnenburg, Dachau, Lichtenberg, Mauthausen, Ravensbruck, Neuengamme, Grossrosen, Buchenwald, Vught, Flossenburg, Stutthof, Auschwitz, and Struthof; as well, lesbians wore pink triangles in the concentration camps at Butzow and Ravensbruck. In the final months of the war, the men with pink triangles received brief military training. They were to be sent out as cannon fodder in the last-ditch defense of the fatherland.
But the death of other pink triangles came much more swiftly. A survivor gives this account: “He was a young and healthy man. The first evening roll call after he was added to our penal company was his last. When he arrived, he was seized and ridiculed, then beaten and kicked, and finally spat upon. He suffered alone and in silence. Then they put him under a cold shower. It was a frosty winter evening, and he stood outside the barracks all through that long, bitterly cold night. When morning came, his breathing had become an audible rattle. Bronchial pneumonia was later given as the cause of his death. But before things had come to that, he was again beaten and kicked. Then he was tied to a post and placed under an arc lamp until he began to sweat, again put under a cold shower, and so on. He died toward evening.”
Another survivor: “One should not forget that these men were honourable citizens, very often highly intelligent, and some had once held high positions in civil and social life. During his seven-year imprisonment, this writer became acquainted with a Prussian prince, famous athletes, professors, teachers, engineers, artisans, trade workers and, of course, hustlers. Not all of them were what one might term “respectable” people, to be sure, but the majority of them were helpless and completely lost in the world of the concentration camps. They lived in total isolation in whatever little bit of freedom they could find. I witnessed the tragedy of a highly cultured attache of a foreign embassy, who simply couldn't grasp the reality of the tragedies taking place all around him. Finally, in a state of deep desperation and hopelessness, he simply fell over dead for no apparent reason. I saw a rather effeminate young man who was repeatedly forced to dance in front of SS men, who would then put him on the rack, chained hand and foot to a crossbeam in the guardhouse barracks, and beat him in the most awful way. Even today I find it impossible to think back on all my comrades, all the barbarities, all the tortures, without falling into the deepest depression. I hope you will understand.”
The ruthlessness of the Nazis culminated in actions so perversely vindictive as to be almost incomprehensible. Six youths arrested for stealing coal at a railroad station were taken into protective custody and duly placed in a concentration camp. Shocked that such innocent boys were forced to sleep in a barracks also occupied by pink triangles, the SS guards chose what to them must have seemed the lesser of two evils: they took the youths aside and gave them fatal injections of morphine. Morality was saved.
The self-righteousness that prompted this type of action cuts through the entire ideology glorifying racial purity and extermination of degenerates to reveal stark fear of homosexuality. Something of this fear is echoed in the statement by Hitler cited above, which is quite different in tone from the propagandistic cant of Himmler’s exhortations. Himmler saw homosexuals as congenital cowards and weaklings. Probably as a result of his friendship with Rohm, Hitler could at least imagine “the best and most manly of characters” being homosexual.
Hitler ordered all the gay bars in Berlin closed as soon as he came to power. But when the Olympics were held in that city in 1936, he temporarily rescinded the order and allowed several bars to reopen: foreign guests were not to receive the impression that Berlin was a “sad city.”
Despite, and perhaps because of, their relentless emphasis upon strength, purity, cleanliness and masculinity, the all-male Nazi groups surely contained a strong element of deeply repressed homoeroticism. The degree of repression was evidenced by the Nazi reaction to those who were openly gay. In the Bible, the scapegoat was the sacrificial animal on whose head the inchoate guilt of the entire community was placed. Homosexuals served precisely this function in the Third Reich.
The ideological rationale for the mass murder of homosexuals during the Third Reich was quite another matter. According to the doctrine of Social Darwinism, only the fittest are meant to survive, and the law of the jungle is the final arbiter of human history. If the Germans were destined to become the master race by virtue of their inherent biological superiority, the breeding stock could only be improved by the removal of degenerates. Retarded, deformed and homosexual individuals could be eliminated with the dispassionate conscientiousness of a gardener pulling weeds. (Indeed, it is the very vehemence and passion with which homosexuals were persecuted that compels us to look beyond the pseudoscientific rationale for a deeper, psychological dynamic.)
* * *
The institutionalized homophobia of the Third Reich must also be seen in terms of the sexual revolution that had taken place in Germany during the preceding decades. The German gay movement had existed for thirty-six years before it (and all other progressive forces) was smashed. The Nazis carried out a “conservative revolution” which restored law and order together with nineteenth-century sexism. A system of ranking women according to the number of their offspring was devised by Minister of the Interior Wilhelm Frick, who demanded that homosexuals “be hunted down mercilessly, for their vice can only lead to the demise of the German people.”
Ironically, the biologistic arguments against gay people could be supported by the theories advanced by the early gay movement itself. Magnus Hirschfeld and the members of the Scientific-Humanitarian Committee had made “the Third Sex” a household term in Germany; but the rigidly heterosexual society of the Third Reich had no patience with “intersexual variants” and turned a deaf ear to pleas for tolerance. The prominent Nazi jurist Dr. Rudolf Klare wrote: “Since the Masonic notion of humanitarianism arose from the ecclesiastical/Christian feeling of charity, it is sharply opposed to our National Socialist worldview and is eliminated a priori as a justification for not penalizing homosexuality.”

Sources: The Body Politic, Issue 11, January/February 1974. People with a History: An Online Guide to Lesbian, Gay, Bisexual and Trans History.
Hess, Homosexuality and the Third Reich
In "British Files Hide Truth About Hess Plot" (letter, June 21), Wolf Rudiger Hess, the son of Rudolf Hess, alleges that ascribing homosexuality to his father was one of the K.G.B.'s "last-ditch efforts to humiliate the Third Reich."
This is really too much. The Third Reich humiliated itself with unprovoked aggression, unparalleled barbarity, war crimes of every sort, genocide and, last of all, the extermination of at least 10,000 homosexuals, 90 percent of whom were citizens of the Reich. How then could it be further humiliated by the proof that one of its leaders was himself involved in homosexual affairs? Is homosexuality a greater disgrace than the perpetration of the Holocaust?
As it happens, many sources mention Rudolf Hess as an habitue of the gay subculture of Berlin in the Weimar era. The one author whom Wolf Hess names, Kurt G. W. Ludecke, in his 1938 book, "I Knew Hitler," says (page 586) that "I couldn't quite see the epithet of 'Fraulein,' for he was virility itself." Robert G. L. Waite, in "The Psychopathic God: Adolf Hitler," 1977, declares (page 235) that "Hitler was closely associated with Ernst Rohm and Rudolf Hess, two practicing homosexuals who were among the very few people with whom he used the familiar du ."
And the Italian gay activist Massimo Consoli, in his 1984 "Homocaust" (page 39) cites authors who allude to the homosexuality not just of Rudolf Hess, but also of many of the early Nazi Party leaders, disclosed by "Ernst Testis" in "Das Dritte Reich Stellt Sich Vor" of 1933 (pages 100-12). All these authors wrote wholly independent of the K.G.B. and its eventual efforts at "disinformation."
The sexual orientation of Rudolf Hess should not affect the judgment of historians on him; but to interpret the disclosure as "humiliation" is preposterous.

WARREN JOHANSSON
WILLIAM A. PERCY
Boston, June 22, 1991

The writers are co-authors of "Homosexuals and the Holocaust," an article in the Simon Wiesenthal Center Annual (volume 7).
Nazi Persecution of Homosexuals
While male homosexuality remained illegal in Weimar Germany under Paragraph 175 of the Criminal Code, German homosexual-rights activists became worldwide leaders in efforts to reform societal attitudes that condemned homosexuality. After the First World War, a rich gay sub-culture developed in Berlin - strong enough to attract homosexuals from abroad. Until 1933 there existed simultaneously over one hundred gay and lesbian pubs, and a variety of gay, lesbian and transsexual magazines were published.
But many in Germany regarded the Weimar Republic's toleration of homosexuals as a sign of Germany's decadence. The Nazis posed as moral crusaders who wanted to stamp out the "vice" of homosexuality from Germany in order to help win the racial struggle. Once they took power in 1933, the Nazis intensified persecution of German male homosexuals. Persecution ranged from the dissolution of homosexual organizations to internment in concentration camps. From then on homosexual men were persecuted as "state enemies" and labelled an "infection risk".
The Nazis believed that male homosexuals were weak, effeminate men who could not fight for the German nation. They saw homosexuals as unlikely to produce children and increase the German birthrate. The Nazis held that inferior races produced more children than "Aryans," so anything that diminished Germany's reproductive potential was considered a racial danger. In order to promote the heterosexual ideal, the Nazi government under Göring provided quick promotion for civil servants who married early, and "Matrimonial Credits" were issued to women as an economic incentive to procreation.
During the Nazi regime, the police had the power to jail indefinitely --without trial-- anyone they chose, including those deemed dangerous to Germany's moral fiber.
At first, in the Third Reich, homosexuals were not persecuted in a systematic way or simply because of their sexual orientation. Fines of DM175 were handed out for sexual "deviancy", which included kisses, flirts and ambiguous touches among men. As long as gays were ready to give up their love lives or agree to a fictitious marriage they were relatively safe. Lesbian women - with the exception of Austria - were not prosecuted during the Nazi era. In the concentration camps they were, unlike gay men, not registered as a special group of prisoners and thus can only be identified from the records with difficulty.
Among the personal responses to the growing police attention to individual homosexual's lives was the "protective marriage" to give the appearance of conformity. Paul Otto (left) married the woman behind him with her full knowledge that his long-time partner was Harry (right). Berlin, 1937.
Private Collection, Berlin
United States Holocaust Memorial Museum #073
SS chief Heinrich Himmler directed the increasing persecution of homosexuals in the Third Reich. Lesbians were not regarded as a threat to Nazi racial policies and were generally not targeted for persecution. Similarly, the Nazis generally did not target non-German homosexuals unless they were active with German partners. In most cases, the Nazis were prepared to accept former homosexuals into the "racial community" provided that they became "racially conscious" and gave up their lifestyle.
From 1919 to 1933
During the Weimar Republic the civil rights for gays and lesbians movement, which had been founded during the period of the German Empire, grew in strength. In 1929 the Law Committee of the Reichstag (Parliament) recommended the abolition of the law relating to punishment for homosexual acts between adults. However, the increase in votes for the Nazis and the crisis of the Weimar Republic prevented the carrying out of the Committee's decision.
Within a month after Hitler took power on January 30, 1933, the new Nazi minister of the interior issued an order to close all gay bars and also forbade "obscene literature", condemning homosexuals as "socially aberrant." As part of the Nazis' attempt to purify German society and propagate an "Aryan master race," brown-shirted storm troopers raided the institutions and gathering places of homosexuals. In the autumn of that same year, the first gay and transgender men were sent to the newly built concentration camps.
- January 30: The National Socialist (Nazi) Party, led by Adolf Hitler, takes power.
- February 22: Prostitution was banned.
- February 23: The Prussian Minister of the Interior orders the closing of the restaurants and pubs "in which, by serving as meeting places, the practice of unnatural sex-acts is encouraged". Gay and lesbian pubs were closed down. Police closed bars and clubs such as the "Eldorado" and banned publications such as Die Freundschaft (Friendship). In this early stage the Nazis drove homosexuals underground, destroying their networks of support.
- March 3: Nudism was banned.
- March 7: Pornography was banned.
- March 17: The West German Morality League began its Campaign against Homosexuals, Jews, Negroes and Mongols. The first male homosexuals are sent to concentration camps.
- On May 6 the students of the Gymnastic Academy, led by Storm Troopers (Sturmabteilung; SA), looted Magnus Hirschfeld's "Institute for Sexual Sciences". They poured bottles of ink over the manuscripts, terrified the staff, and threw the journals out of the windows. The next day SA troops arrived to cart away two lorry-loads of books, and the building was requisitioned for the use of the Nazi Association of German Jurists and Lawyers. Hirschfeld's citizenship was revoked, and mobs carried his effigy in anti-gay/anti-Semitic demonstrations.
Four days later, as part of large public burnings of books viewed as "un-German," most of this collection of over 12,000 books and 35,000 irreplaceable pictures was thrown into a huge bonfire along with thousands of other "degenerate" works of literature, such as the works of Bertolt Brecht, Thomas and Heinrich Mann and Franz Kafka, at the book burning in Berlin's city center, on the square between the former Royal Library and Berlin's Opera House (now known as Bebelplatz).
Magnus Hirschfeld, the founder of the Institute and a pioneer in the scientific study of human sexuality, was lecturing in France at the time and chose not to return to Germany.
- In July the gay rights activist Kurt Hiller was arrested and sent to Oranienburg concentration camp, where for nine months he was on the verge of death due to brutal mistreatment, until he was released and sent into exile. In a speech in 1921 he had addressed gay men: "In the final analysis, justice for you will only be the fruit of your own efforts. The liberation of homosexuals can only be the work of homosexuals themselves."
- November 13: the Hamburg City Administration asked the Head of Police to "pay special attention to transvestites and to deliver them to the concentration camps if necessary."
The legal provisions to arrest "sex criminals" were broadened. Without any attempt to produce legal proof, many SA leaders were murdered in the summer of 1934 (June 30, "The Night of the Long Knives"), among them their chief of staff, Hitler's buddy Ernst Röhm. The official reason given was that the regime wanted to cleanse society of such dens of sexual debauchery. The same year, a special Gestapo (Secret State Police) section for "homosexual crimes" was set up.
Rudolf Diels, the founder of the Gestapo (secret state police), lectured his colleagues in 1934 on how homosexuals had caused the downfall of ancient Greece. He recorded some of Hitler's personal thoughts on the subject:
"He lectured me on the role of homosexuality in history and politics. It had destroyed ancient Greece, he said. Once rife, it extended its contagious effects like an ineluctable law of nature to the best and most manly of characters, eliminating from the reproductive process precisely those men on whose offspring a nation depended. The immediate result of the vice was, however, that unnatural passion swiftly became dominant in public affairs if it were allowed to spread unchecked."
According to Nazi propaganda, both homosexuals and Jews destroyed the so-called masculinity and purity of the German nation; both homosexuals and Jews are characterized by perverse and degenerate sexualities. In 1934, the Reich Ministry of Justice emphasized that "it is precisely Jewish and Marxist circles which have always worked with special vehemence for the abolition of §175." In effect, Jews and homosexuals were portrayed as collaborators in the corruption of the German nation.
24th October - Heinrich Himmler orders all police stations and police authorities to draw up a list of all persons who have, in any way, been homosexually active. The lists are to be sent to the Secret Police Headquarters in Berlin. A special department for homosexuality is set up there at the end of October. Police in many parts of Germany had been compiling such lists of suspected homosexual men since 1900. The Nazis used these "pink lists" to hunt down individual homosexuals during police actions.
In 1934, 766 gay men were convicted and imprisoned.
26th June - Changes in the "Law for the Prevention of Children with Inherited Diseases" also make possible the castration of "political-criminal homosexual males". In order to avoid prison or concentration camp, many homosexuals who had been sentenced to a jail term are forced to agree to "voluntary" castration. (From 1942 onwards, castration in concentration camps without the consent of the victim is legalised.)
The tone had been set by the Röhm putsch, and on its first anniversary - June 28, 1935 - the Ministry of Justice decided that § 175 had to be revised. The revisions provided a legal basis for extending Nazi persecution of homosexuals. Ministry officials expanded the category of "criminally indecent activities between men" to include any act that could be construed as homosexual. The courts later decided that even intent or thought sufficed.
On September 1, 1935, a harsher, amended and broadened version of § 175 of the Criminal Code, originally framed in 1871, went into effect, closing what were seen as loopholes in the existing law and punishing a broad range of "lewd and lascivious" behavior between men.
A law was passed requiring the sterilization of all homosexuals, schizophrenics, epileptics, drug addicts, hysterics, and those born blind or malformed. By 1935, 56,000 people were thus "treated." In actual practice, the homosexuals were literally castrated rather than sterilized. In 1935 all local police departments were required to submit to the Gestapo lists of suspected homosexuals; shortly there were 20,000 names on the index.
The campaign against homosexuality was escalated by the introduction of the "Law for the Protection of German Blood and German Honour." Until 1935, the only punishable offence had been anal intercourse; under the new § 175a, ten possible "acts" were punishable, including a kiss, an embrace, even homosexual fantasies! One man, for instance, was successfully prosecuted on the grounds that he had observed a couple making love in a park and watched only the man.
After the expansion of penalties under §175 in 1935, Himmler spoke triumphantly about the purity of the German nation:
"Just as we today have gone back to the ancient Germanic view on the question of marriage mixing different races, so too in our judgment of homosexuality - a symptom of degeneracy which could destroy our race - we must return to the guiding Nordic principle: extermination of degenerates. Germany stands and falls with the purity of the race."A 1935 propaganda campaign and two show trials in 1936 and 1937 alleging rampant homosexuality in the priest-hood, attempted to undercut the power of the Roman Catholic Church in Germany, an institution which many Nazi officials considered their most powerful potential enemy.
On October 26, 1936, Nazi leader Heinrich Himmler formed within the Security Police a "Reich Central Office to Combat Homosexuality and Abortion": Special Office (II S), a subdepartment of Executive Department II of the Gestapo. Its task is to gather information and lead an effective fight against both forms of the "population-plague". The linking of homosexuality and abortion reflected the Nazi regime's population policies to promote a higher birthrate of its "Aryan" population.
Josef Meisinger, executed in 1947 for his brutality in occupied Poland, led the new office. The police had powers to hold in protective custody or preventive arrest those deemed dangerous to Germany's moral fiber, jailing indefinitely --without trial-- anyone they chose. In addition, homosexual prisoners just released from jail were immediately re-arrested and sent to concentration camps if the police thought it likely that they would continue to engage in homosexual acts.
Also in 1936, as part of the clean-up campaign preparatory for the Olympics, homosexual meeting places were raided in Hamburg and on one night alone 80 homosexuals were brought to Concentration Camp Fuhlsbüttel.
But when the Olympics were held in Berlin in 1936, Hitler temporarily rescinded the order closing gay bars and allowed several to reopen: foreign guests were not to receive the impression that Berlin was a "sad city."
In 1936, 4,000 gay men were convicted and imprisoned.
On February 18, 1937, SS leader Heinrich Himmler gave his infamous lecture in Bad Tölz, before a group of high-ranking SS officers. He spoke on the homosexual danger, implying that it could menace, through infection, the homosocial institutions of the Nazi regime.
Himmler's Division II was responsible for the control of "illegal parties and organizations, leagues and economic groups, reactionaries and the Church, freemasonry, and homosexuality." Even after serving their prison sentences, such "enemies of the state" were taken into "protective custody" (Schutzhaft) - a euphemism for internment in concentration camps.
Under the revised Paragraph 175 and the creation of Special Office II S, a subdepartment of Executive Department II of the Gestapo, the number of prosecutions increased sharply. From 1937 to 1939, the peak years of the Nazi persecution of homosexuals, the police increasingly raided homosexual meeting places, seized address books, and created networks of informers and undercover agents to identify and arrest suspected homosexuals. Half of all convictions for homosexual activity under the Nazi regime occurred during 1937 - 1939.
The official SS newspaper, Das Schwarze Korps, announced in 1937 that there were two million German homosexuals and called for their death. The extent to which Himmler succeeded in this undertaking is unknown, but the number of homosexuals sent to camps was far in excess of the fifty thousand who served jail sentences. The Gestapo dispatched thousands to camps without a trial. Moreover, "protective custody" was enforced retroactively, so that any gay who had ever come to the attention of the police prior to the Third Reich was subject to immediate arrest. (The Berlin police alone had an index of more than twenty thousand homosexuals prior to the Nazi takeover.)
From 1937 onwards many of the gay men previously arrested, were sent to concentration camps after they had served their "regular" prison sentence.
Reich Legal Director Hans Frank in 1938 issued orders for more rigorous surveillance:
"Particular attention should be addressed to homosexuality, which is clearly expressive of a disposition opposed to the normal national community. Homosexual activity means the negation of the community as it must be constituted if the race is not to perish. That is why homosexual behaviour, in particular, merits no mercy."The police stepped up raids on homosexual meeting places, seized address books of arrested men to find additional suspects, and created networks of informers to compile lists of names and make arrests
On April 4, the Gestapo issued a directive indicating that men convicted of homosexuality could be incarcerated in concentration camps.
Major scandals involving accusations of homosexuality also reached the army, most notably the false accusations against its highest-ranking officer, General von Fritsch, in 1938.
It should be noted that Nazi authorities sometimes used the charge of homosexuality to discredit and undermine their political opponents. Nazi leader Hermann Göring used trumped-up accusations of homosexual improprieties to unseat army supreme commander Von Fritsch, an opponent of Hitler's military policy, in early 1938.
In 1938, 8,000 gay men were convicted and imprisoned.
Also in 1938, because of his alliance with Hitler, Mussolini began to persecute homosexuals. Several thousand were exiled to prisons, some in the Lipari islands; others were deprived of their posts and remanded to small provincial towns. There is no evidence, however, that the Fascist regime ever killed any homosexuals. Ironically, in 1930 Mussolini had intervened in a parliamentary debate to prevent the inclusion of an article in the penal code criminalizing homosexual conduct on the grounds that it was "rare among Italians and practiced only by decadent foreigners" who should not be driven out of the country because they contributed to Italy's supply of foreign exchange.
After the beginning of the war in 1939, it was decided that no prisoner would be released from concentration camps. Persons who were considered to endanger the social body could be exterminated.
Gay men in most occupied countries suffered little from the Nazis because they did not endanger the German race (except when they seduced German soldiers). In this case they were interned in German Level Three camps. The chances for survival in a Level Three camp were low indeed.
Homosexuals were distinguished from other prisoners by a pink triangle, worn on the left side of the jacket and on the right pant leg. There was no possibility of "passing" for straight, and the presence of "marked men" in the all-male camp population evoked the same reaction as in contemporary prisons: gays were brutally assaulted and sexually abused.
12th July - Himmler orders that all homosexuals sentenced under § 175 "who have seduced more than one partner" should be taken into "preventive detention" after they are released from prison. In reality that means they are sent to a concentration camp. Those incarcerated there for § 175 offences are forced to wear a pink triangle in order to make them identifiable.
15th November - In a Decree of the Führer for the Cleansing of the SS and the Police, Hitler orders the death penalty for homosexual activity by members of the SS and the police force.
19th May - The head of the General Staff of the German Army (Wehrmacht), General Keitel, issues a decree laying down "regulations for the punishment of unnatural sexual acts." In "special difficult cases" the death penalty should be ordered.
The Danish SS-Doctor Carl Vaernet carried out medical experiments on homosexuals in Buchenwald concentration camp. He wanted to "cure" homosexuals by implanting artificial hormone glands in the region of the upper leg.
8th May - The war comes to an end. The concentration camps are liberated. Unlike other Nazi laws, the Nazi version of § 175 is not repealed by the Allies. Some homosexuals who had been liberated are required to serve the remainder of their sentence in a "normal" prison. § 175 remains in force in the Federal Republic of Germany (West Germany) until 1969. The German Democratic Republic (East Germany) adopts the "milder" pre-Nazi version.
As police raids on homosexuals began and became widespread, more and more gays were identified and charged. A man suspected of violating § 175 would be questioned, photographed and often softened up with the use of force. Under extreme pressure and violent interrogation those arrested would then be forced to give the names and addresses of other homosexuals known to them.
Gay survivor Gad Beck, speaking about his arrest (from the documentary 'Paragraph 175').
Often the Gestapo would have raided the house of a homosexual on arrest and found an address book that would have led them to other violators of § 175.
Statements and confessions were signed under intense, often physical pressure and once a signature was obtained the arrestee would be charged. As most 'confessed' to their crime, few were given a fair hearing or the chance to fight their case in a court of law.
At the point of arrest suspects were given no chance to return home or the opportunity to communicate with their family about their whereabouts. Many did not see their families again until after their liberation from concentration camps or after completion of prison terms.
Homosexuals charged under § 175 were held in so-called Schutzhaft or 'protective custody' at a variety of prisons and detention centres, including Waldheim prison and Fuhlsbüttel prison.
Gay survivor Heinz F., speaking about his arrest (from the documentary 'Paragraph 175').
The first special center to house criminals and homosexuals was erected in 1933 at Dachau, southern Germany, which was largely seen as the prototype for further camps. The Sachsenhausen camp opened in 1936 to eventually house more than 200,000 prisoners, including many homosexuals.
When the huge network of concentration camps was in place throughout Germany and occupied Europe, many arrested homosexuals found themselves deported straight from police custody, without any chance of trial. Violators of § 175 were then held mainly at Auschwitz-Birkenau, Treblinka, Flossenburg, Neuengamme and Schirmeck, although the Dachau and Sachsenhausen camps still continued to take homosexuals.
From 1935 men convicted under § 175 could 'volunteer' to undergo castration in order to "free themselves" from their "degenerate sex drive." This was an idea of SS chief Himmler. Many homosexuals agreed to the operation believing that they would then be set free. After the operation many were then simply re-arrested as they were still thought to be a degenerate risk to the purity of the Reich.
While gay men made up the majority of homosexual victims, lesbians were by no means spared persecution. Although § 175 made no reference to lesbianism, the Third Reich had no place for women who could not reproduce and further the Aryan race. Homosexual men were regarded as largely degenerate and dangerous impurities to the Reich, whereas all women were regarded as 'passive' and in need of men. Generally lesbianism was regarded as a non-permanent state resulting from confused friendships rather than a systematic threat.
The Nazis outlawed and closed all lesbian bars, groups and publications. Police were encouraged to raid known lesbian meeting places, creating a climate of fear. This forced many women to break off friendships and to meet in secret. Some escaped possible persecution by entering into marriages with homosexual friends as a form of cover; others moved to different towns where they could pass unrecognised. Survivor Annette Eick, b. 1909, escaped to the UK on false papers that she had secured from a woman she had met at a lesbian bar. As a Jewish lesbian, she would almost certainly have been persecuted had she stayed in Berlin.
In some cases police arrested and charged lesbians as 'prostitutes' or 'asocials', which would certainly have led to prosecution and possible deportation, but there is no evidence to show that women were ever made to wear the pink triangle.
There are documented cases of lesbians being held at the German camp Ravensbruck.
One woman, Henny Schermann, b. 1912, was arrested in 1940 in Frankfurt and was labelled a 'licentious lesbian' on her police mug shot. Also identified as a 'stateless Jew', she was deported to Ravensbruck concentration camp, where two years later she was selected for extermination and gassed at the Bernburg psychiatric hospital.
Breaking the Silence
After the 'liberation' some survivors including Karl Gorath, bravely struggled for recognition through the courts in post-war society. However, with Paragraph 175 still in place, many survivors tried to put their experiences behind them fearing further persecution.
'I've spoken about it before. I don't want to anymore. That's in the past for me.'
Gay survivor, Karl Gorath
Unwilling to go quietly into the night, one survivor began writing down his painful memories. The result was the powerful 'Männer mit dem rosa Winkel' ('The Men with the Pink Triangle'). First published in 1971, the German book opened the lid on a part of history that had remained hidden for so long. The Austrian survivor chose to remain anonymous, fearing possible repercussions, instead relating his experiences to the German writer Heinz Heger. The book was later translated into English and republished in 1980 as 'The Men With the Pink Triangle', when it received wider recognition.
Other survivor books soon followed, including 'The Pink Triangle' by Richard Plant; 'An Underground Life: Memoirs of a Gay Jew in Nazi Berlin' by Gad Beck; 'Liberation was for Others: Memoirs of a Gay Survivor of the Nazi Holocaust' by Pierre Seel (originally published in French as 'Moi, Pierre Seel, déporté homosexuel'); and 'Damned Strong Love: The True Story of Willi G. and Stephan K.' by Lutz van Dijk. (A full list of these titles can be found in the resources section of this site.)
Martin Sherman's 1979 award-winning play 'Bent' brought the suffering of gay men in Nazi Europe to the stage and furthered awareness of the subject. A film version followed in 1997. The play was followed by the release of two documentaries featuring survivor testimonies: 'Desire' (directed by Stuart Marshall, 1989) and 'We Were Marked With A Big "A"' (directed by Elke Jeanrod & Josef Weishaupt, 1990).
Historians began to research the Nazi persecution of homosexuals extensively, among them German-born Dr. Klaus Mueller, who has produced many articles on the subject. In 1995 he helped and encouraged eight survivors to issue a collective declaration demanding judicial and moral recognition of their persecution. The declaration read:
'Declaration of gay survivors 50 years after their liberation'
"50 years ago, Allied troops did liberate us from Nazi concentration camps & prisons. But the world we had hoped for did not happen to come true. We were forced to hide again & faced on-going persecution under the same Nazi-law that was on the books since 1935 & stayed on the books until 1969. Raids were frequent. Some of us - just tasting their new freedom - were even sentenced to long-term prison again.
Although some of us tried courageously to gain recognition by challenging the courts up to the West German Supreme Court, we were never acknowledged as being persecuted by the Nazi regime. We were excluded from financial compensations for the victims of the Nazi regime. We lacked the moral support & sympathy of the public. No SS-man ever had to face a trial for the murder of a gay man in or outside the camps. But whereas they now enjoy a pension for their 'work' in the camps, our years in the camps are subtracted from our pension.
Today we are too old & tired to struggle for the recognition of the Nazi injustice we suffered. Many of us never dared to testify. Many of us died alone with their haunting memories. We waited long, but in vain for a clear political & financial gesture of the German government & courts. We know that still very little is taught in schools & universities about our fate. Even Holocaust museums & memorials many times don't mention the Nazi persecution of homosexuals.
Today, 50 years later, we turn to the young generation & to all of you who are not guided by hate & homophobia. Please support us in our struggle to memorialize & document the Nazi atrocities against homosexual men & lesbian women. Let us never forget the Nazi atrocities against Jews, Gypsies, Jehovah's witnesses, Freemasons, the disabled, Polish & Russian prisoners of war & homosexuals.
let us learn from the past & let us support the young generation of lesbian women & gay men, girls & boys to lead unlike us a life in dignity & respect, with their loved ones, their friends & their families."
In 1999 the groundbreaking documentary 'Paragraph 175', directed by Rob Epstein and Jeffrey Friedman, brought together the testimonies of eight survivors. Although not the first documentary on the subject, it was largely regarded as the most comprehensive. Winning several awards, including the Sundance Film Festival award for Best Documentary Direction, the film became instrumental in raising awareness and was cited in the eventual apology and recognition for gay victims by the German government.
Recognition did eventually come, but too late for many gay victims and survivors, who lived the rest of their lives as criminals in the eyes of the law. While memorials remember the many other victims of the Holocaust, it was 54 years before one included the gay victims. In January 1999 Germany finally held its first official memorial service for the homosexual victims at the former Sachsenhausen concentration camp.
Apology at last
However, it wasn't until December 2000 that an actual apology came. The German government issued an apology for the prosecution of homosexuals in Germany after 1949 and agreed to recognise gays as victims of the Third Reich. Survivors were finally encouraged to come forward and claim compensation for their treatment during the Holocaust (although claims had to be registered before the end of 2001).
The Geneva-based aid agency, the International Organisation for Migration (IOM), was responsible for introducing and handling the claims.
On May 17th 2002, the process was completed as thousands of homosexuals, who suffered under the Reich, were officially pardoned by the German government. About 50,000 gay men were included. German justice minister Hertha Daeubler-Gmelin told parliament, "We all know that our decisions today are more than 50 years late, they are necessary nonetheless. We owe it to the victims of wrongful Nazi justice."
After the many years of ongoing persecution and Nazi terror, the freedom dreams of many concentration camp prisoners finally came true when, in 1944, the liberation began. The German army was being defeated on all fronts, and as the Allies approached, many camps were evacuated. Those left behind were bewildered and confused.
On July 24th, 1944, the Soviet Red Army arrived at the Maidanek camp and liberated those inside. Other camps soon followed with the arrival of various Allied troops, although the largest of the death camps - Auschwitz - was not liberated until January 27th 1945. World War II ended on May 7th, 1945, when Nazi Germany finally surrendered to the Allied forces.
After the camps were liberated and the plight of the Jewish victims acknowledged worldwide, the persecution of homosexuals continued throughout post-war Germany. While many survivors were rebuilding their lives and families, initially in displaced persons camps, homosexuals faced further persecution and social exclusion. In fact many pink triangle survivors were re-imprisoned under § 175, with time spent in concentration camps deducted from their pensions. Time spent in the camps was not credited against their sentences, which then had to be completed in prison.
While other victims of the Holocaust received compensation for loss of family and loss of education, homosexuals remained deviants in the eyes of post-war society. In fact, in Germany many more men were prosecuted under § 175 in the years immediately following the Nazi regime.
The gay survivors who were liberated (i.e. not subject to further prison terms) often found themselves ostracized from society. Some were not welcomed back to their homes in the aftermath of war for the 'shame' they had brought on their family's reputation. Those that did return often kept their experience to themselves fearing that the sensitive nature of the horrors would bring further distress to family members. Some never spoke out about their suffering.
Gay survivor Heinz Heger (pseudonym)
In the post-war years many homosexuals tried to restart their battered lives; some entered into marriage; others struggled to find anonymity in their communities; some even entered into the armed forces. The stigma of the pink triangle was clearly a heavy burden and, without the support and contact of gay friends who were either in hiding or dead themselves, many survivors lived with the silent 'shame' of their experience in secret.
The fight for Justice
With § 175 still in place, many survivors tried hard to put their experiences behind them, fearing further persecution. However, after the 'liberation' some survivors did bravely struggle for recognition through the courts. Survivors such as Karl Gorath, Heinz Dörmer, and Pierre Seel fought for many years for restitution for their imprisonment. Gorath's attempts at legal reparations were rejected both in 1953 and 1960. Pierre Seel refused to give up and continued fighting throughout the 1980s and 1990s.
In the Nuremberg war crime trials that followed the 1945 liberation, no mention was ever made of crimes against homosexuals. No SS official was ever tried for specific atrocities against pink triangle prisoners. Many of the known SS doctors, who had performed operations on homosexuals, were never brought to account for their actions. One of the most notorious SS doctors was Carl Peter Vaernet, who performed numerous experiments on pink triangle inmates at the Buchenwald and Neuengamme camps. He was never tried for his crimes and escaped to South America, where he died a free man in 1965.
Many of the pink triangle survivors were never recognised as victims of the Holocaust during their lives and never lived to receive reparations. For those who continued to fight, it would be many years before their efforts paid off.
Artist Richard Grune, who trained at the Bauhaus school in Weimar under teachers including Paul Klee and Wassily Kandinsky, moved to Berlin in February 1933.
Richard Grune, ca. 1922.
Walter Gropius, Bauhaus Dessau, 1925–26. View from the Northwest. Photo Lucia Moholy, 1926. Bauhaus-Archiv Berlin. © 2002 Artists Rights Society (ARS), New York/ VG Bild-Kunst, Bonn
Between 1933 and 1945, Germany's Nazi government under Adolf Hitler attempted to rid German territory of people who did not fit its vision of a "master Aryan race." Grune and other homosexuals in Germany felt the impact of the new regime within weeks of Adolf Hitler's appointment as chancellor in January 1933.
In February, police and Storm Troopers began enforcing orders to shut same-sex bars and clubs. During a crackdown over the next several months, most gathering places for homosexual men and women were closed down, fundamentally disrupting their public lives. Grune was arrested in December 1934, one of 70 men caught in a wave of related denunciations.
Under interrogation, Grune admitted to being homosexual. He was held in "protective custody" for five months, then returned to his childhood home on the German-Danish border to stand trial for violating Paragraph 175. In September 1936, Grune was convicted and sentenced to prison for one year and three months, minus time already served in protective custody. It is estimated that some 50,000 men served prison terms as convicted homosexuals.
At his release, the Gestapo returned Grune to protective custody, asserting that the sentence had been too lenient. In early October 1937, Grune was sent to the Sachsenhausen concentration camp, where he remained until being transferred to the Flossenbürg camp in early April 1940.
World War II helped to conceal the Nazis' radicalized persecution at home. Thousands of homosexuals were sent to forced labor camps. There, in an explicit campaign of "extermination through work," homosexuals and other so-called security suspects were assigned to grueling work in ceaselessly dangerous conditions.
Grune himself remained in the Flossenbürg camp until 1945. As American forces approached, he escaped the evacuation of Flossenbürg and joined his sister in Kiel. He spent much of the rest of his life in Spain, but later returned to Kiel, where he died in 1983.
Richard Grune's desire to bring attention to the terror of the concentration camps led to the 1947 publication of a limited–edition portfolio of his lithographs. His work generally reflects what he experienced at the Sachsenhausen and Flossenbürg concentration camps; some images are based on information from other survivors. The portfolio is among the most important visual recordings of the daily nightmare of the Nazi concentration camps to appear in the immediate postwar years.
Excerpts from Vera Laska
[The following is excerpted from Vera Laska (ed.), Women in the Resistance and in the Holocaust: The Voices of Eyewitnesses, Greenwood Press, Westport & London, 1983, and was posted by Kenneth McVay to: Newsgroups: alt.revisionism,soc.history Subject: LEST WE FORGET: The Fate of the Homosexuals Followup-To: alt.revisionism Organization: The Old Frog's Almanac, Vancouver Island, CANADA Keywords: Auschwitz,Dirlwanger,Flossenburg,homosexual,Ravensbruck]
"A few remarks are in order about the various sex relationships in concentration camps. I start out with the homosexuals since they were accorded a special category among the inmates and `merited' a separate, pink triangle. ...
Very little has been written about the tens of thousands of homosexuals who were the damnedest of the damned, the outcasts among the outcasts in the concentration camps. There are really only estimates of figures. During the twelve years of Nazi rule, nearly 50,000 were convicted of the crime of homosexuality. The majority ended up in concentration camps, and virtually all of them perished. According to a recent study, `at least 500,000 gays died in the Holocaust.' As Stefan Lorant observed in 1935, the homosexuals `lived in a dream', hoping that the heyday of gays in Germany of the 1920s would last forever. Their awakening was terrible. Yet, the few survivors among them did not qualify for postwar restitution as the Jews or the politicals did, because as homosexuals they were outside the law. By German law, homosexuality was a crime. After the prison sentences most homosexuals were automatically shipped to concentration camps. In 1935, a new law legalized the `compulsory sterilization (often in fact castration) of homosexuals.' A special section of the Gestapo dealt with them. Along with epileptics, schizophrenics and other `degenerates', they were being eliminated. Yet homosexuality was still so widespread that in 1942 the death penalty was imposed for it in the army and the SS.
In concentration camps, some pink triangles became concubines of male kapos or other men in supervisory positions among the inmates. They were known as doll boys; this brought them certain protection while the love affair lasted. The pink triangles were constantly abused by the SS, camp officials and fellow prisoners. They were seldom called other names than arse-holes, shitty queers or bum-fuckers. They were allowed to talk only to each other, they had to sleep with the lights on and with hands above their blankets. These people were not child molesters; those were considered professional criminals, green triangles.
While men with pink triangles were given the hardest jobs and were being constantly abused for their admitted sexual preference, considerable numbers of `normal' men engaged in homosexual acts with impunity -- that was an emergency outlet. This double standard was an additional psychological burden for the pink triangles.
The SS considered it great sport to taunt and torture the homosexuals. The camp commander at Flossenburg often ordered them flogged; as the victims were screaming, he `was panting with excitement, and masturbated wildly in his trousers until he came,' unperturbed by the hundreds of onlookers.
A sixty-year-old gay priest was beaten over his sexual organs by the SS and told: `You randy old rat-bag, you can piss with your arse-hole in the future.' He could not, for he died the next day. Eyewitnesses tell of homosexuals being tortured to death by tickling, by having their testicles immersed alternately into hot and icy water, by having a broomstick pushed into their anus.
Himmler, who wanted to eradicate homosexuals `root and branch', had the idea to `cure' them by mandatory visits to the camp brothel at Flossenburg. Ten Ravensbruck women provided the services with little success. The women here also were told that they would go free after six months, but instead they were shipped to Auschwitz.
The pink triangles worked in the clay pits of Sachsenhausen, the quarries of Buchenwald, Flossenburg and Mauthausen; they shoveled snow with their bare hands in Auschwitz and elsewhere; they were used as living targets at the firing range; they had the dirtiest jobs in all camps. Towards the end of the war, they were told that they would be released if they let themselves be castrated. The ones who agreed were shipped to the infamous Dirlewanger penal division on the Russian front.
Josef Kohout is the Name; Prisoner No. 1896
By David W. Dunlap
(from the New York Times, June 26, 1995)
WASHINGTON--Transformed by homosexuals from a mark of Nazi persecution into an emblem of gay liberation, the pink triangle gained great currency but lost its link to personal experience.
Today, after half a century, the symbol can be associated once again with one man's name, with his voice, his story.
Josef Kohout is the name; prisoner No. 1896, Block 6, at the Flossenburg concentration camp in Bavaria, near the Czech border. At the age of 24, he was arrested in Vienna as a homosexual outlaw after the Gestapo obtained a photograph he had inscribed to another young man pledging "eternal love." Liberated six years later by American troops, Mr. Kohout returned to Vienna, where he died in 1994.
Among his personal effects was a fragile strip of cloth, two inches long and less than an inch wide, with the numbers 1 8 9 6 on the right and a pink triangle on the left. It is the only one known to have been worn by a prisoner who can be identified, said Dr. Klaus Muller of the United States Holocaust Memorial Museum here.
Together with Mr. Kohout's journal and the letters his parents wrote to the camp commander in a fruitless effort to visit him, the badge has been given to the museum by Mr. Kohout's companion.
"I find it very important that the pink triangle is connected with the people who were forced to wear it," said Dr. Muller, the museum's project director for Western Europe.
In its mission, the museum embraces not only the Jewish victims of the Holocaust, but other groups who were persecuted, like the Gypsies, the disabled, Jehovah's Witnesses and Russian prisoners of war.
It has begun a $1.5 million campaign to locate homosexual survivors and document their experiences, following a suggestion from David B. Mixner, a corporate consultant in Los Angeles who is active in gay causes. The campaign coordinator, Debra S. Eliason, said $350,000 has been pledged so far.
Patrons of the museum are given identification cards of victims to personalize the otherwise vast historical narrative, and of the victims identified in the cards, a handful were homosexuals. On his first visit, Representative Gerry E. Studds of Massachusetts, one of three openly gay members of Congress, coincidentally received the card of Willem Arondeus, a homosexual Dutch resistance fighter who was killed in 1943. "For me to get that card was just stunning," Mr. Studds said.
"Of the many places we never existed, certainly the Holocaust was one, in most people's minds," Mr. Studds said. "The supreme triumph in the last generation, in terms of the struggle of gay and lesbian people, is recognition of the simple fact that we exist."
Mr. Kohout is not the only homosexual victim of Nazism whose presence is being felt. Gradually, at the twilight of their lives, a handful of survivors are stepping forward to press gingerly their own claims for recognition, having all but given up hope for restitution.
"The world we hoped for did not transpire," said a declaration signed earlier this year by eight survivors now living in Germany, France, Poland and the Netherlands. They called for the memorializing and documenting of Nazi atrocities against homosexuals and others.
They pleaded for "the moral support of the public."
The signers included Kurt von Ruffin, now 93, a popular actor and opera singer in Berlin during the 1930's who was sent to the Lichtenburg camp in Prettin, and Friedrich-Paul von Groszheim, 89, who was arrested, released, rearrested, tortured, castrated, released, rearrested and imprisoned in the Neuengamme camp at Lubeck.
Between 10,000 and 15,000 homosexuals may have been incarcerated in the camps, Dr. Muller said, out of approximately 100,000 men who were arrested under Paragraph 175 of the German criminal code, which called for the imprisonment of any "male who commits lewd and lascivious acts with another male." (The law was silent on lesbianism, although individual instances of persecutions of lesbians have been recorded.)
Perhaps 60 percent of those in the camps died, Dr. Muller said, meaning that even in 1945, there may have been only 4,000 survivors. Today, Dr. Muller knows of fewer than 15.
Their travails did not end at liberation. They were still officially regarded as criminals, rather than as political prisoners, since Paragraph 175 remained in force in West Germany until 1969. They were denied reparations and the years they spent in the camps were deducted from their pensions. Some survivors were even jailed again.
Old enough to be grandfathers and great-grandfathers, the survivors scarcely courted attention as homosexuals, having learned all too well the perils of notoriety. "It is not easy to tell a story you were forced to hide for 50 years," Dr. Muller said.
One of the first men to break his silence was the anonymous "Prisoner X. Y.," who furnished a vividly detailed account of life as a homosexual inmate in the 1972 book, "The Men With the Pink Triangle," by Heinz Heger, which was reissued last year by Alyson Publications.
By a coincidence that still astonishes him, Dr. Muller said, Prisoner X. Y.--"the best documented homosexual inmate of a camp"--turned out to be Mr. Kohout.
After his arrest in 1939, Mr. Kohout was taken to the Sachsenhausen camp and served at the Klinker brickworks, which he called "the 'Auschwitz' for homosexuals." Prisoners who were not beaten to death could easily be killed by heavy carts barreling down the steep incline of the clay pits.
In 1940, he was transferred to Flossenburg. On Christmas Eve 1941, inmates were made to sing carols in front of a 30-foot-high Christmas tree on the parade ground. Flanking it were gallows from which eight Russian prisoners had been hanging since morning. "Whenever I hear a carol sung--no matter how beautifully--I remember the Christmas tree at Flossenburg with its grisly 'decorations,'" he wrote.
Mr. Kohout died in March 1994, at the age of 79. A month later, in an apartment in Vienna, his surviving companion submitted to an interview by Dr. Muller, who had tracked him down through a gay group in Austria and pressed him for more and more information.
As Dr. Muller recalled it, the companion finally said: "If you're so interested in all these details, I have some material in two boxes and, honestly, I didn't have the strength to go through it because I'm still struggling with his death. But if you want to, we could look at these."
The first thing the companion unpacked was Mr. Kohout's pink triangle badge. The first thing Dr. Muller thought was, "This is impossible."
"We had searched for a pink triangle for years," he said, "one that would not only document the Nazi marking system but also could be reconstructed as a part of one individual story."
The triangle itself is still in storage, but part of Mr. Kohout's journal is now on display at the museum. It is the page on which he wrote simply of his liberators' arrival on April 24, 1945: "Amerikaner gekommen."
-- The Nazis arose out of a macho homosexual subculture, long incubating in Germany, that idealized warriorship and homosexual pederasty (male sex with younger males). The evidence of this is overwhelming, and has been suppressed in service to the promotion of homosexuality.
A Hitler self-portrait from the Vienna years.
-- A great many of Hitler's friends, both in early years and later, were homosexual. Do straight males normally have homosexuals as friends? Do they surround themselves with homosexuals in career or business? Are straight males normally comfortable around gay males?
Pre-Nazi Germany was rife with homosexuality, including "masculine" homosexual movements
-- Germany was the birthplace of "gay" rights movements, well prior to the rise of the Nazis. It had, in fact, a number of homosexual activists and movements, most notably "Hellenic revival" movements that regarded super-masculinity combined with pederasty as an ideal.
-- Hitler was a dandy in his teens and had a dandified best friend. Hitler wrote a petulant, jealous letter to him talking about how much it upset Hitler to see the friend talking to others. He said "I demand exclusivity."
Hitler's best friend, August Kubizek, toward whom Hitler expressed intense jealousy if Kubizek even talked to others. Kubizek wrote several changing versions of his relationship with Hitler. In the first he spoke of Hitler kissing him on the cheek at one greeting. He changed that in a later version.
-- They were bedmates
"Machtan suggests that each of Hitler’s longer-term relationships in his youth -- with Reinhold Hanisch, August Kubizek, Rudolf Hausler and Ernst Hanfstaengl -- were homosexual 'love affairs.' "
-- When on his own, Hitler stayed for a long time first in a men's hostel that was a known center of male prostitution and a magnet for homosexual men. Hitler stayed in such an environment for three years.
"In Germany’s National Vice, Samuel Igra wrote that as a young man Hitler “had been a male prostitute in Vienna and Munich” (Igra:67). Lending credence to this is the fact that for quite a long time Hitler “chose to live in a Vienna flophouse known to be inhabited by many homosexuals” (Langer:192).
That “flophouse” was the Meldemannstrasse Hostel. Hitler’s long-time “gay” friend Ernst Hanfstaengl identified this residence as “a place where elderly men went in search of young men for homosexual pleasures” (Machtan:56).
“It was an open secret at the beginning of the 20th century,” adds Machtan, “that municipal hostels for homeless males were hubs of homosexual activity... [where many young men] kept themselves afloat by engaging in prostitution. Hitler spent over three years in this environment” (Machtan:51).
"This would help to explain Hitler’s close relationships to his purportedly homosexual patrons Dietrich Eckart and Karl Haushofer. Rector writes that, as a young man, Hitler was often called “Der Schoen Adolf”(“the handsome Adolf”) and that later his looks “were also to some extent helpful in gaining big-money support from Ernst Roehm’s circle of wealthy gay friends” (Rector:52).
-- In WWII, witnesses from the eastern front who were around Hitler claimed he had a steady homosexual relationship with another soldier. (The two can be seen in photos.)
"Additional allegations addressed homosexual conduct by Hitler during the first World War. The so-called “Mend Protocol,” a document prepared by German military intelligence under Admiral Canaris, contains the testimony of Hans Mend. Considered highly credible, Mend had this to say about Hitler:
"Meanwhile, we had gotten to know Hitler better. We noticed that he never looked at a woman. We suspected him of homosexuality right away, because he was known to be abnormal in any case. He was extremely eccentric and displayed womanish characteristics which tended in that direction....In 1915 we were billeted in the Le Febre brewery at Fournes. We slept in the hay. Hitler was bedded down at night with “Schmidl,” his male whore. We heard a rustling in the hay. Then someone switched on his electric flashlight and growled, “Take a look at those two nancy boys.” I myself took no further interest in the matter."
"Hitler and “Schmidl” (Ernst Schmidt) were, in Schmidt’s words, “always together” during their war years. They remained very close friends and were occasional housemates for over thirty years (ibid.:89ff).
"A year or so after the incident described by Mend, Hitler supposedly “posed nude for a homosexual officer named Lammers -- a Berlin artist in civilian life -- and subsequently went to bed with him” (ibid.:100).
"This may be the incident to which Rauschning referred when he later told U.S. Investigators “that Lance Corporal Hitler and an officer had been charged with engaging in sexual relations” (ibid.).
-- This was the reason Hitler was not promoted despite good service, according to the military file.
"The homosexual connection certainly helps to explain how Hitler became involved with the nationalists generally, and Ernst Roehm specifically, after the war. It is likely that Roehm’s homosexual inclinations were the reason that Colonel Ritter von Epp, the Freikorps commander, chose Roehm as his adjutant. “There are many indications that the relationship between Roehm and Epp was homoerotic,” writes Machtan,“and Hitler once let slip in later years that Roehm’s homosexuality first became known around 1920” (ibid.:106f). Roehm, in turn, brought Hitler into the homoerotic Freikorps brotherhood.
-- Hitler was seen sleeping with his homosexual lover one night while at the Eastern front in WWII, and a witness described him as being in bed with his little "slut."
-- Adolf Hitler had a record with the the Vienna police in connection to homosexual arrests.
"Desmond Seward, in Napoleon and Hitler...reports that “the files of the Viennese police list him [Hitler] as a homosexual” (Seward:299).
-- Two male prostitutes reported that they had been hired by Adolf Hitler
"He reports, for example on the testimony of Hermann Rauschning, a trusted Hitler confidante...Rauschning reports that he has met two boys who claimed that they were Hitler’s homosexual partners..."
"Eugen Dollman, former member of Himmler’s staff and one-time Hitler interpreter, cited testimonies from the files of the Munich vice squad in which a series of young men identified Hitler as the man who had “picked them up” on the streets for homosexual relations (Machtan:135ff). Dollman himself was also homosexual (ibid.).
-- Witnesses who met Hitler described him as effeminate and womanly. As Hitler developed and went for power, he ratcheted up a hyper-masculine presentation and persona. But this was actually a homosexual and warrior-pederast ideal.
-- Hitler was never known to have a significant relationship with a woman that was definitely sexual. He may have been bisexual, homosexual, or had some other kind of sexual deviancy. However, the evidence points more to sexual relationships with men than with women. In any case, women did not factor into his life in any significant way until his 30s as he came to power, and this was Eva Braun, with whom he had an ambiguous relationship and which was very likely an image show for the German people.
"[A] small number of contemporaries... were pretty explicit on the subject of Hitler's sex life. These include August Kubizek, Kurt Ludecke, Ernst Hanfstaengl, Rudolf Diels, Erich Ebermayer, Eugen Dollman, Christa Schroder and Hans Severus Ziegler. They are all unanimous in stating, quite positively, that Hitler did not have sex with women. Some of them expressly say that Hitler was homosexual; others convey the same thing obliquely" (Machtan:23)
From "The Hidden Hitler"
-- Though women never figured much into Hitler's life till later life with Evan Braun, Hitler spent his life constantly around male company, most of them homosexuals
-- Homosexuals tend to be most comfortable only with other homosexuals, not with straights. They prefer each other, trust each other more, keep each others' secrets, network, and create little mafias for power -- perhaps more than other groups. Because of the secrecy inherent to the lifestyle, Gay brotherhoods" can be very powerful. It is the very nature of tightly knit brotherhoods to be able to get power and dominate situations. Coordinated gay brotherhoods, with their necessity of secrecy, are obviously a very powerful form of brotherhood, helping to account for the astounding rise of the Nazis.
- Hitler had known homosexuals around him and in the highest positions. This included Ernst Rohm, head of the notorious and very dangerous SA that protected the Nazis and raised Hitler to power.
"Hitler was closely associated with Ernst Roehm and Rudolf Hess, two homosexuals who were among the very few people with whom he used the familiar du [“thou”]."
-- In private Hitler never expressed displeasure or moral disapproval privately over the homosexuality of lieutenants and high level Nazis that he knew were homosexuals. This even though he came from a Christian culture and was ostensibly Christian.
-- Hitler's companionship with Eva Braun was a mystery to the German people. It was almost certainly a farce for show, to fool the German people and cover up his homosexuality which would have ruined him with the public. Hitler apparently never made any advances toward Braun.
Ernst Rohm, the brutal, violent open homosexual who most helped Hitler rise to power, leader of the violent SA. He regularly procured boys for sex and had homosexual orgies. He was one of only two persons on such intimate and equal terms with "the Fuhrer" that he could speak to Hitler in the intimate form of "Thou." When Rohm became dangerous to Hitler, Hitler killed him personally along with other dangerous knowers or power threat. He used the killing as PR in which Nazis were' fighting degeneracy,' but it was really a ruse and distraction technique to throw an ink cloud over Hitler and the Nazis' own degeneracy, while getting rid of some very dangerous Nazis who went way back and had the power to blackmail Hitler and create damage as Nazi power and resources grew.
-- Hitler's closest aide, Rudolf Hess, was was known in homosexual .Hitler and Hess lived together in close quarters at Spandau prison for about 9 months. Hitler expressed great fondness for Hess, and dismay, when he was released but Hess kept back.
"When Hitler left the prison he fretted about his friend who languished there, and spoke of him tenderly, using Austrian diminutives: “Ach mein Rudy, mein Hesserl, isn’t it appalling to think that he’s still there.” One of Hitler’s valets, Schneider, made no explicit statement about the relationship, but he did find it strange that whenever Hitler got a present he liked or drew an architectural sketch that particularly pleased him, he would run to Hess — who was known in homosexual circles as “Fraulein Anna” — as a little boy would run to his mother to show his prize to her...Finally there is the nonconclusive but interesting fact that one of Hitler’s prized possessions was a handwritten love letter which King Ludwig II had written to a manservant.
"Hess was known by other names in the German “gay” subculture. In recent years, long sealed Soviet archives have been opened to the West. In Deadly Illusions, authors John Costello and Oleg Tsarev report of seeing the “so-called ‘Black Bertha’ file, named from Hess’s reported nickname in Berlin and Munich”
-- As Hitler rose to power he wrote a letter to his cousin saying that his past must never be known. He manifested an extreme desire to obliterate his past, even having his boyhood village destroyed. The Nazis used Hitler's home village for artillery target practice, making it uninhabitable, driving away old-timers who knew Hitler, and presumably covering up or exposing any possible hidden records. Nazis were always near the site and would beat and drive away anybody who came there. (Why would a lover of German heritage and country do that to his own village?)
-- Hitler was influenced by writers and philosophers advocating the Hellenic (Grecian) ideal of warriorship combined with pederasty as a superior form of sexuality. There were a number of homosexual writers coming up in Hitler's youth who combined intense nationalism with the Grecian warrior-pederasty ideal. In this ideal, sexual relations between boys and older men were considered the highest, purest kind of sex and not a perversion.
The man called "The father of National Socialism" was a homosexual pederast.
After expulsion from the priesthood for gay sex, Lanz Von Liebenfels became a main influence on Hitler. Hitler was a fan of Lanz' pro-Aryan nationalist publication that contained homo-erotic philosophy. Liebenfels designed the Nazi flag and was the first to fly it.
"It was through Lanz that Hitler would learn that many of his heroes of history were also 'practicing homosexuals' " (Waite, 1977:94f).
Hitler was exposed to these writers, including the Theosophist pervert Guido von List, Liebenfels, and others. He had a signed book from the first, and personally knew the second after seeking him out.
-- There were two types of homosexuals in Germany, as today, and there was a conflict between them: The "butch" homosexuals had a hyper masculine ideal. The other was the feminine homosexuals or "femmes," who the butch homosexuals despised. Butch homosexuals in Germany did not consider themselves homosexuals since they were masculine. They considered male-on-male sexual relations to be an aspect of a higher manliness, as in the ancient Hellenic concept. They believed this was the superior form of sex and maleness.
-- The butch gays (the Nazi founders) considered heterosexuals to be lower than them, only valuable as "breeders." The butch gays considered any feminine homosexuals to be lower still than heterosexuals, even subhuman.
-- Jörg Lanz von Liebenfels has been called "The Man Who Gave Hitler His Ideas.", Many of the views and plans expressed by the homosexual Liebenfels were matched to a 'T' in the later Nazi program.
-- Lanz taught that homosexual sex involved Odin energy. Hitler was a fan of Liebenfels' homoerotic nationalist magazine in Vienna.
-- The homosexual Liebenfels designed the Nazi flag and was the first person to ever fly it, over his castle where he housed his male order.
"After being expelled from the monastery, Lanz formed his own occultic order called the Ordo Novi Templi or the Order of the New Temple (ONT).
The ONT was related to the Ordo Templi Orientis or Order of the Temple of the East, which, like List’s organization, practiced tantric sexual rituals" (Howard:91).
-- Hitler was put into contact with Liebenfels through an occult bookstore when he was trying to obtain back issues of Liebenfels' journal, Ostara.
-- As Hitler was rising the former mentor Liebenfels got in trouble with Hitler for writing this:
“Hitler is one of our pupils...you will one day experience that he, and through him we, will one day be victorious and develop a movement that makes the world tremble”
Lanz Von Liebenfels
-- To cover up his association with Liebenfels and throw the public off, Hitler then had the writings of his former teacher banned in Germany.
-- The premier promoter of gay rights (and normalization of homosexuality) in Germany was a Jew named Magnus Hirschfeld. He was a "femme" (a feminine gay), as contrasted to the hyper-masculine homosexuality that Nazi founders admired.
The sexual deviant Magnus Hirschfeld was a pioneer promoter of "gay rights" and the liberalization of attitudes toward homosexuality in Germany. Many "butch" type gay Nazis were obligated to go there in the years leading up for sex (homosexual) offenses, so he had files on them. The masculine butch gays who dominated the Nazi movement did not like the theory of homosexuality as a "third sex" or "a woman trapped in a man's body" and it offended them. They considered themselves to be a higher kind of man, supermen, and believed that a macho-oriented pederasty did not constitute homosexuality. When the Nazis had sufficient power, the records of Hirschfeld's sex institute were burned in a great ceremony. The Nazis pointed to his institute as a symbol of degeneracy. But the real reason was destroy the embarrassing information about the degeneracy of a great many Nazis. This was the famous "Nazi book burning" we see in film. After the event an employee of the institute wrote: "Not 20 percent of the Nazis are sexually normal."
-- The "butch" gays of Nazism did not consider themselves to be really homosexuals, but a superior kind of true men or uber-man. Another writer who firmed up the Nazi mindset considerably was the homosexual Friedrich Nietzsche. He was a rather disgusting creep with an over-long mustache who hated Christianity and spoke of the "superman."
-- The Nazis left gays alone until the high-profile gay antics of Ernst Rohm began to attract attention. Then they decided to send some gays to camps, but only feminine gays. Even so, the Nazis imprisoned only a small fraction of the population of feminine gays. This was done as PR and disinformation to throw the German people off to the truth about the Nazis. And the Nazis were happy to get rid of feminine gays anyway, because they loathed them.
-- Frequently if Nazis arrested people on the basis of "homosexuality," this was just a ruse. Rather, they arrested political enemies using homosexuality as a pretext.
-- German gays made a cult of Richard Wagner's operas. They flocked to them and made the operas pickup sites. Hitler was often in the audience, and was known to adore Wagner's operas. He said "Wagner is my religion."
"...Wagner’s Bayreuth [theatre and home] was “a notorious international rendezvous for prominent homosexuals” whose absorption with Wagner achieved “a cultlike quality” (ibid.:39). One factor in this attraction may have been that Wagner’s sons Richard and Siegfried were homosexuals.
"Richard later committed suicide (ibid.:254). Siegfried, pressured to have an heir, married a woman much younger than himself and had several children but surreptitiously continued his homosexual affairs (Wagner:p.197).
"Hitler was very close to the Wagner family and spent a great deal of time in Bayreuth. He made numerous private visits there between 1925 and 1933, often with male homosexual companions (ibid.:253ff).
"...Machtan cites one incident, however, in which he and Schreck failed to keep an appointment to vacation with their Bayreuth hosts. Instead, Schreck and Hitler turned aside at the Bad Berneckhealth resort, some 20 miles away, where they spent Christmas alone -- the only guests at the inn (ibid.:174).
"Hitler may have had yet darker motives for visiting the Wagner home. Only recently revealed is the accusation by Wagner family members “that Hitler sexually abused the young Wieland [Wagner’s grandson, now past 75] during the ‘20s.” These allegations came to light in a Time magazine interview with American author and former diplomat to Germany, Frederic Spotts, whose research for the book Bayreuth (about the Wagnerian opera festival of the same name) included interviews with the Wagner family (Time, August 15, 1994:56).
“Spotts says that his original source was one of Wieland’s own children... Now a respected academic, Spotts says it was while he was researching “Bayreuth” that he interviewed his source -- who, he insists, is totally reliable and has no reason to lie.
Spotts writes: "This family member told me Hitler sexually abused Wieland in the 1920s when the boy was a preadolescent’...Hitler, who idolized Richard Wagner's supernationalistic operas (as well as his anti-Semitism), had become a close friend of Wieland's mother's. Winifred Wagner gave him the run of the child’s nursery. Far from being revolted by what allegedly happened to him, Wieland avidly collaborated with his right-wing family during World War II (Penthouse, undated:32).
"Weiland later became Hitler’s protégé (Wagner:228) and was exempted from military service by Hitler’s personal intervention (ibid.:105). The weight of the evidence indicates that Hitler was deeply involved in a series of short and long-term homosexual relationships. Even more certain is that he knowingly and deliberately surrounded himself with practicing homosexuals from the time he was a teenager. His later public pronouncements against homosexuality were designed to hide the life-long intimacy -- sexual and/or homoerotic -- which he maintained with the various men he knew and accepted as homosexuals.
-- Two of the Wagner sons turned out gay.
-- Hitler got his inspiration for Nazi political culture from the many Wagnerian operas he watched. He lived out, as it were, a Wagnerian opera pageant in real life with the entire German people as players.
-- Frederic the Great was known to be a homosexual and promulgated homosexuality in his army. Frederic the Great was Hitler's biggest hero.
-- The double-S logo of the SS was designed by a homosexualist.
-- The Nazi "death's head" ring was designed by a homosexualist.
--- Hitler personally designed many aspects of Nazi uniforms and regalia.
-- As the Nazis expanded their invasions many Nazi authorities in occupied countries were homosexuals. (Such as in France.)
"Desmond Seward, in Napoleon and Hitler, quotes Italian dictator, Benito Mussolini, who referred to Hitler as “that horrible sexual degenerate” (Seward:148).
-- There was always a great mystery what Hitler and Eva Braun did during their time in private quarters. Servants who watched their sheets reported that they never saw signs of sexual intercourse.
"Writer Charlotte Wolff, M.D. quotes Magnus Hirschfeld about Hitler in her book Magnus Hirschfeld. (Hirschfeld, you will remember, was Director of the Sex Research Institute of Berlin which was destroyed by Hitler in 1934."
"About three years before the Nazis came to power we had a patient at the Institute who had a liaison with Roehm. We were on good terms with him, and he told us a good deal of what happened in his circle...He also referred to Adolf Hitler in the oddest possible manner. ‘Afi is the most perverted of us all. He is very much like a soft woman, but now he makes great propaganda in the heroic morale’” (Wolff:438).
-- Four women alleged to have had some kind of sexual interaction with Hitler. It has been alleged that Hitler was a coprophile, and demanded that women perform degrading acts. One woman alleged to have had sexual involvement was Hitler's niece, Gely.
"Hitler contemporary Otto Strasser writes of an encounter he had with Hitler’s niece Gely: Next day Gely came to see me. She was red eyed, her round little face was wan, and she had the terrified look of a hunted beast.
“He locked me up,” she sobbed. “He locks me up every time I say no!” She did not need much questioning. With anger, horror and disgust she told me of the strange propositions with which her uncle pestered her.
"I knew all about Hitler’s abnormality. Like all the others in the know, I had heard all about the eccentric practices to which Fraulein Hoffmann was alleged to have lent herself, but I had genuinely believed that the photographer’s daughter was a little hysteric who told lies for the sheer fun of it.
"But Gely, who was completely ignorant of this other affair of her uncle’s, confirmed point by point a story scarcely credible to a healthy-minded man." (Strasser, 1940:72)
All four of the women alleged to have had sexual encounters with Adolf Hitler attempted suicide.
-- Homosexuals are disproportionately represented among those who do heinous violent crimes. Nine of the top 10 serial killers were homosexuals.
-- Homosexuals are attracted to Nazism and neo-Nazism even to this day. The neo-Nazi movement in Germany contains homosexuals.
The Nazi Movement was Riddled with Homosexuals
Many founding and high-ranking Nazis were homosexuals. The evidence indicates that Adolf Hitler himself was a homosexual. Persecution of homosexuals by the Nazis was for show to deflect from themselves and keep the German public fooled. And only feminine type homosexuals were then persecuted. The "Butch" homosexuals who founded Nazism viewed femmes as lower than heterosexuals, not even men. The persecution of a small percentage of Germany's femme gays was a public relations move to obfuscate the Nazis' own perversity and placate the German masses.
-- The Nazi movement was riddled with homosexuals at the highest level. The most obvious was Ernst Rohm, the most powerful Nazi in Germany after Hitler. Homosexuals prefer other homosexuals for company.
-- After coming to power the Nazis did not persecute homosexuals at all. They did so only when their own homosexuality began to surface to the public, especially because of the antics of the obtusely flagrant homosexualist Ernst Rohm. Hitler was in fact Rohm's early protege. The gay Rohm was Hitler's longtime right hand man, head of the violent SA, the most powerful man in Germany beside Hitler, and the man most instrumental in creating the Nazi Party and setting Hitler up.He was also a homosexualist, an open advocate homosexuality, really one of the first militant gay activists. Rohm refused to hide his homosexuality. He was even a member of an organization that pushed for the public acceptance of homosexuality. The other Nazi gays (including Hitler), were of the usual discreet and covert class.
-- When Rohm's antics and stories about homosexuality in the party became a political threat, Hitler made a show of denouncing gays. Still it was only a small percentage of the hated "femme" gays who were sent to camps. The Nazis also used the new "anti-gay" posture as a ruse to trump up charges and put away all manner of political enemy.
-- Nazism was founded by macho-style "butch" homosexuals who pursued a Greek/Hellenic ideal of warrior pederasty. With the abundant evidence of this, there can be only one reason that media powers have hidden this from the White public while otherwise demonizing Hitler and the Nazis 24-7:
The gay agenda (moral degeneracy and the destruction of the natural family) -- is far more important to these powers than is any further demonization of Hitler. This knowledge about the Nazis would harm the gay agenda. The gay agenda is central to their world-enslaving strategy. It is vital that all Whites understand this.
"Ultimately Hitler used and transformed the movement — much as the Romans had abused the paiderastia of the ancient Greeks — expanding and building upon its romanticism as a basis for the Nazi Party (Rossman:103)."
The early Nazi movement contained a great many men who grew up in the Wandervoegel. Songs of the Wandervoegel morphed later Nazi march songs. The famous Nazi salute (extended arm) was created by the Wandervoegel...
Hans Blueher & the Wandervoegel
From "The Pink Swastika"
“In Germany,” writes Mosse, “ideas of homosexuality as the basis of a better society can be found at the turn of the century within the German Youth Movement” (Mosse:87). Indeed, at the same time that Brand and Friedlander were beginning to articulate their dream of a neo-Hellenic Germany to the masses, a youthful subculture of boys and young men was already beginning to act out its basic themes under the leadership of men like Karl Fischer, Wilhelm Jansen and youth leader Hans Blueher. In Sexual Experience Between Men and Boys, homosexualist historian Parker Rossman writes,
"In Central Europe...there was another effort to revive the Greek ideal of pedagogic pederasty, in the movement of Wandering Youth [Wandervoegel]. Modern gay-homosexuality also can trace some of its roots to that movement of men and boys who wandered around the countryside, hiking and singing hand-in-hand, enjoying nature, life together, and their sexuality. Ultimately Hitler used and transformed the movement — much as the Romans had abused the paiderastia of the ancient Greeks — expanding and building upon its romanticism as a basis for the Nazi Party" (Rossman:103).
Another homosexualist, Richard Mills, explains inGay Roots: Twenty Years of Gay Sunshine how the Wandervoegel movement traces its roots to an informal hiking and camping society of young men started in 1890 by a fifteen-year-old student named Hermann Hoffman. For several years the open-air lifestyle of these boys grew increasingly popular. They developed their own form of greeting, the “Heil” salute, and “much of the vocabulary...[which] was later appropriated by the Nazis” (Mills:168). Early in its development, the movement attracted the attention of homosexual men, including the pederasts who belonged to the Community of the Elite. In 1901 a teacher by the name of Karl Fischer (who, as we have mentioned, called himself der Fuehrer) formalized the movement under the name Wandervoegel (Koch:25, Mills:153).
Hans Blueher, then just seventeen years old, organized the most ambitious Wandervoegel excursion to that date in 1905. It was on this trip that Blueher met Wilhelm Jansen, one of the original founders of the Community of the Elite. At this time the Wandervoegel numbered fewer than one hundred young men, but eventually the number of youths involved in Wandervoegel-type groups in Europe reached 60,000.
Wilhelm Jansen became an influential leader in the Wandervoegel, but rumors of his homosexuality disturbed German society. In 1911, Jansen addressed the issue in a circular to Wandervoegel parents. Jansen told them, “As long as they conduct themselves properly with your sons, you will have to accustom yourselves to the presence of so-called homosexuals in your ranks” (Mills:167). Hans Blueher further substantiated the fact that the movement had become a vehicle for homosexual recruitment of boys with his publication of The German Wandervoegel Movement as an Erotic Phenomenon in 1914 (Rector:39f). Mills writes,
"[T]he Wandervoegel offered youth the chance to escape bourgeois German society by retreating back to nature...But how was this accomplished? What made it possible for the lifestyle created within the Wandervoegel to differ significantly from its bourgeois parent? The answer is simple: the Wandervoegel was founded upon homosexual, as opposed to heterosexual sentiments ...In order to understand the success of the movement, one must acknowledge the homosexual component of its leaders...Just as the leaders were attracted to the boys, so were the boys attracted to their leaders. In both cases the attraction was sexually based" (Mills 152-53).
Like many of the “Butch” homosexuals Blueher had married but only for the purpose of procreation. “Woe to the man who has placed his fate in the hands of a woman,” he wrote. “Woe to the civilization that is subjected to womens’ influence” (Blueher in Igra:95).
Foreshadowing the Nazi regime, Blueher “saw male bonding as crucial to the formation of male elites,” writes homosexualist historian Warren Johansson. “The discipline, the comradeship, the willingness of the individual to sacrifice himself for the nation -- all these are determined by the homoerotic infrastructure of the male society” (Johansson:816). Mills adds that Blueher “believed that male homosexuality was the foundation upon which all forms of nation-states are built” (Mills:152). Blueher called his hypothetical political figures “heroic males,” meaning self-accepting masculine homosexuals. It is precisely this concept of the “heroic male” that prompts Steakley to compare Adolf Hitler’s views to those of Blueher and Friedlander.
But this is not the only instance in which the views of Blueher and Friedlander coincide. Like Friedlander, Blueher believed that homosexuals were the best teachers of children. “There are five sexual types of men, ranging from the exclusively heterosexual to the exclusively homosexual,” writes Blueher. “The exclusive heterosexual is the one least suited to teach young people...[but exclusive homosexuals] are the focal point of all youth organizations” (ibid.:154).
Blueher was also anti-Semitic. In writing about his visit with Magnus Hirschfeld and the SHC, Blueher denigrated Hirschfeld’s egalitarian views, complaining that “concepts like rank, race, physiognomy... things of importance to me -- were simply not applicable in this circle.” Igra adds that “[a]according to Blueher, Germany was defeated [in W.W.I] because the homosexualist way of life (die maennerbuendische Weltanschauung) had been considerably neglected and warlike virtues had degenerated under the advance of democratic ideas, the increasing prestige of family life...the growing influence of women “and, above all, the Jews” (emphasis ours -- Igra:97).
Importantly, Blueher’s hostility towards the Jews was not primarily based on a racial theory but on their rejection of homosexuality. Igra writes,
Soon after the defeat [of Germany in W.W.I] Blueher delivered a lecture to a group of Wandervoegel, which he himself had founded. The lecture was entitled “The German Reich, Jewry and Socialism.” He said: ‘There is no people whose destiny...so closely resembles ours as that of the Jews.’ The Jews were conquered by the Romans, lost their State and became only a race whose existence is maintained through the family. The primary cause of this collapse, he says, was that the Jews had failed to base their State on the homoerotic male community and had staked all on the family life, with its necessary concomitant of women’s encouragement of the civic and social and spiritual virtues in their menfolk rather than the warlike qualities (ibid.:97).
Though largely neglected by historians, Blueher was enormously important to Nazi culture. Igra writes that in the Third Reich “Blueher...[was] adopted by the Nazis as an apostle of social reform. And one of his disciples, Professor Alfred Bauemler...[became] Director of the Political Institute at the University of Berlin” (ibid.:75). Writing before the collapse of the Third Reich, he adds that “[Blueher’s teaching] has been systematically inculcated by the Nazi Press, especially Himmler's official organ, Das Schwarze Korps, and has been adopted in practice as the basis of German social organization. The Nazi élite are being brought up in segregated male communities called Ordensburgen. These are to replace the family as the groundwork on which the state is to rest” (Igra:87). The all-male societies of these Ordensburgen (Order Castles) were fashioned after the Wandervoegel.
Through his influence in the Wandervoegel and later as a fascist theoretician, Hans Blueher must be recognized as a major force in the reshaping of Germany. This (and the homosexuality of other Wandervoegel leaders) is acknowledged by homosexualist author Frank Rector:
Blueher's case further explains why many Nazi Gays were attracted to Hitler and his shrill anti-Semitism, for many gentile homosexuals were rabidly anti-Semitic...Gays in the youth movement who espoused anti-Semitism, chauvinism, and the Fuehrer Prinzip (Leader Principle) were not-so-incipient Fascists. They helped create a fertile ground for Hitler’s movement and, later, became one of its main sources of adherents....A substantial number of those Wandervoegel leaders were known homosexuals, and many others were allegedly gay (or bisexual) (Rector:40).
From Boy Scouts to Brownshirts
In the introduction to his book The Pink Triangle, homosexual author Richard Plant writes of his own experience in a Wandervoegel-type group called “Rovers.” “In such brotherhoods,” writes Plant, “a few adolescents had little affairs, misty and romantic sessions around a blazing fire...Other boys...talked openly about ‘going with friends’ and enjoying it. The leaders of these groups tended to disregard the relationships blossoming around them -- unless they participated” (Plant:3).
Blueher himself described the homosexual quality of the group as follows:
"The Wandervoegel movement inspired the youth all around during the first six years of its existence, without awaking the slightest suspicion...towards its own members...Only very seldom might one might notice one of the leaders raising questions of why he and his comrades didn’t want any girls....[later] the name Wandervoegel was mentioned in the same breath as the words “pederasty club” (Blueher:23f).
Richard Plant’s reminiscences also substantiate that the Wandervoegel groups served as a training ground for Nazis. He recalls his friend in the Rovers, “Ferdi, who explained and demonstrated the mysteries of sex to me and my friends.” Plant was later shocked, he says, upon returning to Germany from abroad “to see Ferdi wearing a brown shirt with a red, white and black swastika armband” (ibid.:4).
E.Y. Hartshorne, in German Youth and the Nazi Dream of Victory records the recollections of a former Wandervoegel member who confirms that the organization was the source of important elements of Nazi culture. Our knowledge of the influence of the Community of the Elite on the Wandervoegel may provide us insight into the cryptic comment at the end of the testimony:
"We little suspected then what power we had in our hands. We played with the fire that had set a world in flames, and it made our hearts hot. Mysticism and everything mystical had dominion over us. It was in our ranks that the word Fuehrer originated, with its meaning of blind obedience and devotion. The word Bund arose with us too, with its mysterious undertone of conspiracy. And I shall never forget how in those early days we pronounced the word Gemeinschaft [”community”] with a trembling throaty note of excitement, as though it hid a deep secret" (Hartshorne:12).
Indeed, not only did the grown-up former members of the Wandervoegel become one of Hitler’s main sources of supporters in his rise to power, but the movement itself became the core of a Nazi institution: the Hitler-Jugend (Hitler Youth). So rampant had homosexuality become in the movement by this time that The Rheinische Zeitung, a prominent German newspaper, warned, “Parents, protect your sons from ‘physical preparations’ in the Hitler Youth,” a sarcastic reference to problems of homosexuality in the organization (Burleigh and Wipperman:188). Sadly, the boys themselves had by this time been completely indoctrinated by their homosexual masters. Waite writes,
With the exception of Ehrhardt, Gerhard Rossbach, sadist, murderer, and homosexual was the most admired hero of nationalistic German youth. “In Ehrhardt, but also in Rossbach,” says a popular book on the youth movement, “we see the Fuehrer of our youth. These men have become the Ideal Man, idolized...and honored as can only happen when the personality of an individual counts for more than anything else"...the most important single contributor of the pre-Hitler youth movement [was] Gerhard Rossbach (Waite, 1969:210f).
Hans Peter Bleuel, in Sex and Society in Nazi Germany, points out that most of the adult supervisors of the Hitler Youth were also SA officers (who were almost exclusively homosexual). Rector states that Baldur von Schirach, leader of the Hitler Youth organization, was reportedly bisexual (Rector:56). In Germany’s National Vice, Jewish historian Samuel Igra confirms this, saying Schirach was arrested by the police for perverse sexual practices and liberated on the intervention of Hitler, who soon afterward made him leader of the Hitler Youth (Igra:72). Igra further states that Schirach was known as “the baby” among the inner pederast clique around Hitler (ibid.:74). Rempel reports that Schirach always surrounded himself with a guard of handsome young men (Rempel:88). Psychiatrist Walter Langer in his 1943 secret wartime report, The Mind of Adolf Hitler, also writes of Schirach’s reputed homosexuality (Langer:99).
In 1934, the Gestapo reported forty cases of pederasty in just one troop of the Hitler Youth. Bleuel writes of the case of one supervisor, a 20-year-old man who was dismissed from the Hitler Youth in 1938. Yet he was transferred to the National Socialist Flying Corps (Civil Air Patrol) “and was assigned to supervise work by members of the Hitler Youth Gliding Association and eventually detained to help with physical check-ups — a grievous temptation. The man was once again caught sodomizing young men, but was not dismissed from the NSFK” (the National Socialist Flying Corps) - Bleuel:119).
Conditions were essentially the same in 1941. Bleuel reports of another homosexual flying instructor involved in “at least ten cases of homosexuality with student pilots of the Hitler Youth” and “a student teacher and student ...[who] had committed twenty-eight proven acts of indecency with twenty boys at Hitler Youth and Young Folk camps” (ibid.:119). He adds that “[t]hese cases were only the tip of the iceberg, for few misdemeanors within the Party became public in later years and even fewer came to trial” (ibid.:119).
The prevalence of homosexuality in the Hitler Youth is also confirmed by historian Gerhard Rempel in his book Hitler’s Children: Hitler Youth and the SS:
"Homosexuality, meanwhile, continued on into the war years when Hitler Jugend boys frequently became victims of molestations at the hands of their SS tutors; Himmler consistently took a hard line against it publicly but was quite willing to mitigate his penalties privately and keep every incident as secret as possible" (Rempel:51f).
This last quote from Rempel raises two important points which will be addressed at greater length later in the book, but deserve at least some mention here. The first point is that Heinrich Himmler, who is often cited as being representative of the Nazi regime’s alleged hatred of homosexuals, was obviously not overly concerned about homosexual occurrences in the ranks of his own organization. The second point is that this homosexual activity continued long after Hitler had supposedly purged homosexuals from the Nazi regime (in 1934) and promoted strict policies against homosexuality (from 1935 on). As we shall see later, these policies were primarily for public relations and were largely unenforced.
An interesting sideline to the story of the Hitler Youth illustrates both the control of the youth movement by pederasts and the fundamental relationship between homosexuality and Nazism. In Great Britain, the pro-Nazis formed the Anglo-German Fellowship (AGF). The AGF was headed by British homosexuals Guy Francis de Moncy Burgess and Captain John Robert Macnamara. British Historian John Rempel relates how Burgess, Macnamara and J.H. Sharp, the Church of England’s Arch-deacon for Southern Europe, took a trip to Germany to attend a Hitler Youth camp. Costello writes,
"In the spring of 1936, the trio set off for the Rhineland, accompanied by Macnamara’s friend Tom Wylie, a young official in the War Office. Ostensibly they were escorting a group of pro-fascist schoolboys to a Hitler Youth camp. But from Burgess’ uproariously bawdy account of how his companions discovered that the Hitler Jugend satisfied their sexual and political passions, the trip would have shocked their sponsors -- the Foreign Relations Council of the Church of England" (Costello: 300).
In pre-World War II France, the pro-Nazi faction was represented by the Radical-Socialist Party (RSP) and the Popular Party (PP). The Secretary-General of the RSP was Edouard Pfeiffer. Costello writes of Guy Burgess' visit to Pfeiffer in Paris shortly before the war:
"As a connoisseur of homosexual decadence, Pfeiffer had few equals, even in Paris. As an officer of the French Boy-Scout movement, his private life was devoted to the seduction of youth. Burgess discovered all this when he visited Pfeiffer's apartment in Paris and found...[him] with a naked young man...he explained to Burgess that the young man was a professional cyclist, who just happened to be a member of Jacques Doriot’s Popular Party" (ibid.:315).
Once again we see flagrant sexual perversion in the heart of the Nazi movement -- long after the Roehm Purge. It appears also that the correlation between Nazism and homosexuality disregarded national boundaries. As we have seen, both Hans Blueher and Benedict Friedlander observed that youth organizations are often (in their view, appropriately) led by pederasts. Events in Europe during the first part of the twentieth century, particularly those involving the National Socialists, strongly support this theory.
The revival of Hellenic culture in the German homosexual movement, then, was an integral factor in the rise of Nazism. Right under the nose of traditional German society, the pederasts laid the groundwork for the ultra masculine military society of the Third Reich. The Wandervoegel was certainly not a “homosexual organization” per se, but its homosexual leaders molded the youth movement into an expression of their own Hellenic ideology and, in the process, recruited countless young men into the homosexual lifestyle. The first members of the Wandervoegel grew to manhood just in time to provide the Nazi movement with its support base in the German culture. As Steakley put it, “[the] Free German Youth jubilantly marched off to war, singing the old Wandervoegel songs to which new, chauvinistic verses were added” (Steakley:58).
Gerhard Rossbach and the Freikorps Movement
The Freikorps movement began during the years immediately following the close of World War I. After the war and the subsequent socialist revolution in Germany in 1918, tens of thousands of former soldiers of the German army volunteered for quasi-military service in a number of independent reserve units called Freikorps (Free Corps), under the command of former junior officers of the German army. These units were highly nationalistic and became increasingly violent as the social chaos of the Weimar Republic worsened. Rossbach’s organization, originally called the Rossbachbund (“Rossbach Brotherhood”) exemplified the German Freikorps. As Waite records in Vanguard of Nazism, “the lieutenants and the captains — Roehm...Ehrhardt, Rossbach, Schultz and the rest — formed the backbone of the Free Corps movement. And...it was they who were the link between the Volunteers [anti-communists] and National Socialism” (Waite, 1969:45). Once again we see the essential relationship between homosexuality and Nazism, since many of these “lieutenants and captains” were known or probable homosexuals, some of whom eventually served in the SA. German historian and Hitler contemporary Konrad Heiden writes that “[m]any sections of this secret army of mercenaries and murderers were breeding places of perversion” (Heiden:30). Historian G. S. Graber agrees:
"Many...[Freikorps] leaders were homosexual; indeed homosexuality appears to have been widespread in several volunteer units. Gerhard Rossbach...was an open homosexual. On his staff was Lieutenant Edmund Heines who was later to become the lover of Ernst Roehm" (Graber:33).
Waite’s analysis shows that the Freikorps movement was one intervening phase between the Wandervoegel movement and the Nazi Stuermabteilung — the SA. “The generation to which the Freikorpskaempfer [‘Free Corps warriors’] belonged,” writes Waite, “the generation born in the 1890s — participated in two experiences which were to have tremendous effect on his subsequent career as a Volunteer [in the Freikorps]. The first of these was the pre-war Youth Movement; the second, World War I” (Waite, 1969:17). The young men who had been molded by the Hellenic philosophies of the youth movement had come of age just in time to fight in the first World War. There, they were further shaped and seasoned by the hardships and horrors of trench warfare.
It was in the trenches of World War I that the concept of Stuermabteilung (Storm Troops) was developed — elite, hard-hitting units whose task it was to “storm” the enemy lines. The tactics of the Storm Troopers proved to be so effective that they were quickly adopted throughout the German army. The Storm Troop system created a tremendous increase in the number of young commanders of a certain breed. Waite writes,
"Only a very special type of officer could be used. He must be unmarried, under twenty-five years of age, in excellent physical health...and above all he must possess in abundance that quality which German military writers call ruthlessness. The result was that at the time of the Armistice Germany was flooded with hundreds of capable, arrogant young commanders who found an excellent outlet for their talents in the Free Corps movement" (ibid.:27).
It is not difficult to recognize that the description of the preferred Storm Trooper is a model of the Wandervoegel hero: ultra masculine, militaristic, physically conditioned, largely unrestrained by Judeo-Christian morality, and guided by the “Fuehrer Principle” (ibid.:28). It is no wonder, then, that many of these men became youth leaders in their turn (ibid.:210). In the preceding chapter, we learned that homosexual sadist and murderer Gerhard Rossbach was “the most important single contributor to the pre-Hitler youth movement” and a “hero to nationalistic German youth.” In the days before Baldur von Schirach developed the Hitler Youth, Rossbach organized Germany’s largest youth organization, named the Schilljugend (“Schill Youth”) in honor of a famous Prussian soldier executed by Napoleon (ibid:210n).
But Rossbach’s contribution to the Nazis was far greater than the mere shaping of young men into Nazi loyalists. It was Rossbach who formed the original terrorist organization which eventually became the Nazi Storm Troopers, also known as “Brown Shirts.” Both the Rossbach Storm Troopers and the Schilljugend were notorious for wearing brown shirts which had been prepared for German colonial troops, acquired from the old Imperial army stores (Koehl:19). It is reasonable to suppose that without Rossbach’s Storm Troopers, Adolf Hitler and the Nazis would never have gained power in Germany. Heiden describes them:
"Rossbach’s troop, roaring, brawling, carousing, smashing windows, shedding blood...was especially proud to be different from the others. Heines had belonged to it before joining Hitler; then Rossbach and Heines had formed a center with Roehm; it led the SA while Hitler was under arrest [for leading the Beer Hall Putsch]" (Heiden, 1944:295).
Rossbach’s Freikorps was formed almost exclusively of homosexuals. As fascist novelist, Edwin Dwinger, would later declare through one of his characters, Captain Werner, “Freikorps men aren’t almost all bachelors for nothing. Believe me, if there weren’t so many of their kind, our ranks would be pretty damn thin” (Theweleit, Vol 1:33). Rossbach’s adjutant, Edmund Heines, was another pederast and a convicted murderer who later became Ernst Roehm’s adjutant in the SA (he was also the sexual partner of Rossbach, Roehm and possibly Hitler as well). During the incident known as “The Night of the Long Knives” in which Hitler killed Roehm and a number of other SA leaders, Heines was surprised in bed with a young SA recruit (Gallo:236). Historian Frank Rector describes Heines:
"Distinguished by a girlish face on the body of a truck driver, Heines was an elegant, suave, and impeccably groomed killer. He liked to shoot his victims in the face with his 7.65 Walther automatic or beat them to death with a club...In addition to Heines’ value as a first rate adjutant, gifted administrative executive, and aggressive and adroit SA leader, Heines had a marked talent as a procurer [of boys]...garnering the fairest lads in the Fatherland for...sexual amusement" (Rector:89).
Perhaps because of Edmund Heines’ special talent, Rossbach assigned him to develop the Schilljugend. Igra tells how he profited thereby:
"Edmund Heines, the group-leader of the storm troops at Breslau, was a repulsive brute who turned the Nazi headquarters of the city into a homosexual brothel. Having 300,000 storm troopers under his command he was in a position to terrorize the neighborhood...One of his favorite ruses was to have members of the youth organization indulge in unnatural practices with one another and then threaten their parents that he would denounce these youths to the police...unless he received...hush money. Thus Heines not only indulged in homosexual orgies himself — he was often Roehm’s consort in this — but he promoted the vice as a lucrative business" (Igra:73).
Ernst Roehm and the Development of the SA
Next to Adolf Hitler, Ernst Roehm was the man in Germany most responsible for the rise of Nazism, indeed of Hitler himself. Rector writes that “Hitler was, to a substantial extent, Roehm’s protégé” (Rector:80). A driving force behind the National Socialist movement, Roehm was one of the early founders of the Nazi Party. Both Roehm and Hitler had been members of the socialist terrorist group called the Iron Fist (Heiden, 1944:89).
It was at a meeting of the Iron Fist that Roehm reportedly met him and “saw in Hitler the demagogue he required to mobilize mass support for his secret army” (Hohne:20). With Roehm’s backing, Hitler became the first president of the Nazi Party in 1921 (ibid.:21). Shortly thereafter, Rossbach’s Freikorps, integrated into the Party first under Herman Goering’s and then Roehm’s authority, was transformed into the dreaded Nazi SA.
In his classic Nazi history, The Rise and Fall of the Third Reich, author William Shirer describes Ernst Roehm as “a stocky, bull-necked, piggish-eyed, scar-faced professional soldier...[and] like so many of the early Nazis, a homosexual” (Shirer:64). Roehm was recruited into homosexuality by Gerhard Rossbach (Flood:196). Rector elaborates,
"Was not the most outstanding, most notorious, of all homosexuals the celebrated Nazi leader Ernst Roehm, the virile and manly chief of the SA, the du buddy of Adolf Hitler from the beginning of his political career? [Hitler allowed Roehm the rare privilege of addressing him with the familiar form “thou,” indicating intimate friendship]. Hitler’s rise had in fact depended upon Roehm and everyone knew it. Roehm’s gay fun and games were certainly no secret; his amorous forays to gay bars and gay Turkish baths were riotous. Whatever anti-homosexual sentiments may have been expressed by straight Nazis were more than offset by the reality of highly visible, spectacular, gay-loving Roehm. If there were occasional ominous rumblings and grumblings about “all those queers” in the SA and Movement, and some anti-gay flare-ups, homosexual Nazis felt more-or-less secure in the lap of the Party. After all, the National Socialist Party member who wielded the greatest power aside from Hitler was Roehm" (Rector:50f).
Consistent with the elitist philosophies of Benedict Friedlander, Adolf Brand, and Hans Blueher, Roehm viewed homosexuality as the basis for a new society. Louis Snyder, prominent historian of the Nazi era, writes,
"[Roehm] projected a social order in which homosexuality would be regarded as a human behavior pattern of high repute...he flaunted his homosexuality in public and insisted that his cronies do the same. What was needed, Roehm believed, was a proud and arrogant lot who could brawl, carouse, smash windows, kill and slaughter for the hell of it. Straights, in his eyes, were not as adept in such behavior as practicing homosexuals" (Snyder:55).
Under Roehm, the SA became the instrument of Nazi terrorism in German society. It was officially founded on August 3, 1921, ostensibly as a “Special section for gymnastics and sport,” but in his first directive to the group, Hitler defined the SA’s purpose as “a means of defense for the movement, but above all a training school for the coming struggle for liberty” (Heiden, 1935:82f).
Historian Thomas Fuchs reports that “The principle function of this army-like organization was beating up anyone who opposed the Nazis, and Hitler believed this was a job best undertaken by homosexuals” (Fuchs:48f). At first serving simply to protect the Nazis’ own meetings from disruptions by rivals and troublemakers, the SA soon expanded its strong-arm tactics to advance Nazi policies and philosophies. In a 1921 speech in Munich, Hitler set the stage for this activity: “[the] National Socialist movement will in future ruthlessly prevent if necessary by force all meetings or lectures that are likely to distract the minds of our fellow citizens...”In Mein Kampf, Hitler describes an incident (when his men were attacked by Communists adversaries) which he considered the baptismal act of the SA:
"When I entered the lobby of the Hofbrauhaus at quarter to eight, I no longer had any doubts as to the question of sabotage...The hall was very crowded...The small assault section was waiting for me in the lobby...I had the doors to the hall shut, and ordered my men — some forty-five or -six — to stand at attention...my men from the Assault Section — from that day known as the SA — launched their attack. Like wolves in packs of eight or ten, they threw themselves on their adversaries again and again, overwhelming them with blows...In five minutes everyone was covered with blood. These were real men, whom I learned to appreciate on that occasion. They were led by my courageous Maurice. Hess, my private secretary, and many others who were badly hurt pressed the attack as long as they were able to stay on their feet" (Hitler:504f).
In all actions the SA bore Roehm’s trademark of unabashed sadism. Max Gallo describes the organization:
"Whatever the SA engage in — whether they are torturing a prisoner, cutting the throat of an adversary or pillaging an apartment — they behave as if they are within their rights, as artisans of the Nazi victory...They are the SA, beyond criticism. As Roehm himself said many times: “The battalions of Brown Shirts were the training school of National Socialism" (Gallo:38).
The favorite meeting place of the SA was a “gay” bar in Munich called the Bratwurstgloeckl where Roehm kept a reserved table (Hohne:82). This was the same tavern where some of the early meetings of the Nazi Party had been held (Rector:69). At the Bratwurstgloeckl, Roehm and associates — Edmund Heines, Karl Ernst, Ernst’s partner Captain [Paul] Rohrbein, Captain Petersdorf, Count Ernst Helldorf — would meet to plan and strategize. These were the men who orchestrated the Nazi campaign of intimidation and terror. All of them were homosexual (Heiden, 1944:371).
Indeed, homosexuality was all that qualified many of these men for their positions in the SA. Heinrich Himmler would later complain of this: “Does it not constitute a danger to the Nazi movement if it can be said that Nazi leaders are chosen for sexual reasons?” (Gallo:68). Himmler was not so much opposed to homosexuality itself as to the fact that non-qualified people were given high rank based on their homosexual relations with Roehm and others. For example, SA Obergruppenfuehrer (Lieutenant General) Karl Ernst, a militant homosexual, had been a hotel doorman and a waiter before joining the SA. “Karl Ernst is not yet thirty-five, writes Gallo, he commands 250,000 men...he is simply a sadist, a common thug, transformed into a responsible official” (ibid.:50f). Later, Ernst became a member of the German Parliament (Machtan:185). Gallo writes,
"Roehm, as the head of 2,500,000 Storm Troops had surrounded himself with a staff of perverts. His chiefs, men of rank of Gruppenfuehrer or Obergruppenfuehrer, commanding units of several hundred thousand Storm Troopers, were almost without exception homosexuals. Indeed, unless a Storm Troop officer were homosexual he had no chance of advancement” (Knickerbocker:55).
Otto Friedrich’s analysis in Before the Deluge is similar:
"Under Rohm, the SA leadership acquired a rather special quality, however, for the crude and blustering Oberster SA Fuehrer was also a fervent homosexual, and he liked to surround himself, in all the positions of command, with men of similar persuasions" (Friedrich:327).
In the SA, the Hellenic ideal of masculine homosexual supremacy and militarism had finally been realized. “Theirs was a very masculine brand of homosexuality,” writes homosexualist historian Alfred Rowse, “they lived in a male world, without women, a world of camps and marching, rallies and sports. They had their own relaxations, and the Munich SA became notorious on account of them” (Rowse:214). The similarity of the SA to Friedlander’s and Brand’s dream of Hellenic revival is not coincidental. In addition to being a founder of the Nazi Party, Ernst Roehm was a leading member of the Society for Human Rights, an offshoot of the Community of the Elite (J. Katz:632).
The relaxations to which Rowse refers in the above quote were, of course, the homosexual activities (many of them pederastic) for which the SA and the CE were both famous. Hohne writes,
"[Roehm] used the SA for ends other than the purely political. SA contact men kept their Chief of Staff supplied with suitable partners, and at the first sign of infidelity on the part of a Roehm favorite, he would be bludgeoned down by one of the SA mobile squads. The head pimp was a shop assistant named Peter Granninger, who had been one of Roehm’s partners...and was now given cover in the SA Intelligence Section. For a monthly salary of 200 marks he kept Roehm supplied with new friends, his main hunting ground being Geisela High School Munich; from this school he recruited no fewer than eleven boys, whom he first tried out and then took to Roehm" (Hohne:82).
Although the original SA chapter in Munich was the most notorious, other SA chapters were also centers of homosexual activity. In Political Violence and the Rise of Nazism, Richard Bessel notes that the Silesian division of the SA was a hotbed of perversion from 1931 onward (Bessel:61).
Roehm and his closest SA associates were among the minority of Nazi homosexuals who did not take wives. Whether for convention, for procreation, or simply for covering up their sexual proclivities, most of the Nazi homosexuals had married. Some, like Reinhard Heydrich and Baldur von Schirach, married only after being involved in homosexual scandals, but often these men, who so hated femininity, maintained a facade of heterosexual respectability throughout their lives. As Machtan notes, “That Hitler...encouraged many of them to marry should not be surprising: every conspiracy requires camouflage” (Machtan:24). These were empty marriages, however, epitomized by one wife’s comment: “The only part of my husband I’m familiar with is his back” (Theweleit:3).
As we have seen, then, the SA was in many respects a creation of Germany’s homosexual movement, just as the Nazi Party was in many ways a creation of the SA. Before we take a closer look at the formation and early years of the Nazi Party, we must examine two other very important movements which contributed to Nazism. These are the occult Theosophical-Ariosophical movement, and the intellectual movement which created the National Socialist philosophy. Both of these movements, which are integral to our understanding of the Nazi Party and its actions, were also heavily influenced by homosexuals.
The Daily Show Uncovers The Nazis’ Secret Gay Past
by Frances Martel | 7:08 pm, July 29th, 2010
More than half a century later, the journalists of The Daily Show have uncovered the real reason for the inhuman cruelty of the Third Reich: every last one of those Nazis was gay. Or at least that’s what Scott Lively, the president of the organization Defend the Family, contends. In search of the truth, Jason Jones went deep into the heart of New York’s gay community to find evidence for Lively’s argument.
Lively argues, with full certainty, that all the soldiers in the Nazi army, and especially Adolf Hitler and the people in his inner circle, were gay. The reason for the high concentration of homosexuality in the Nazi party, according to Lively? “Adolf Hitler,” he explained, “used homosexual soldiers because they were more savage than natural men… They didn’t have the restraint that a normal man has, and so it was easier for them to do some of the terrible things the Nazis did.” And not only were the Nazis gay, “they met in a gay bar.” As for that whole “persecution by the Nazis” thing, Lively contends that the Nazis persecuted gays to “distract public attention away from their own homosexuality.”
To experience the savagery of gay soldiers first-hand, Jones put on a riot-proof suit and trekked on to the New York Gay Community Center, where he sat down with a number of real-life gays, including LGBT rights activist Lt. Dan Choi, and discussed their secret Nazi past. Needless to say, it was scarier than anything Jones could have imagined.
The Gay Man Who Defeated Nazi Germany
By Charles McCain
[Editor’s note from GayPolitics.com: In light of the ongoing debate over whether openly gay people should be able to serve in the U.S. armed forces, it’s worth remembering that gay people already serve with distinction, and that some of those discharged for being gay may have taken with them extraordinary skills or talents necessary for success in places like Iraq and Afghanistan. Author Charles McCain, a World War II expert guest posting here, contributes the following commentary about one such hero, whose incalculable contribution to the Allied effort to defeat Hitler’s Germany is not widely known.]
In 1952, the man who discovered the Ultra Secret was convicted of “charges of committing acts of gross indecency with another man.” The defendant was a rumpled Cambridge mathematics professor who had done something important in the war. Still did a bit of secret work for the government. He looked a regular sort of chap but he wasn’t – he was a poof, a Nancy boy, a queer. The judge gave him two choices: prison or chemical castration through the injection of female hormones. This to one of the handful of men responsible for Allied victory over Nazi Germany in World War Two – a man whose ideas changed our world. He chose the humiliation of being injected with estrogen – the doses so high he developed breasts.
Upon conviction, his security clearance was revoked by the British Government and he was dismissed. Men, straight men – the ones who ran the intelligence establishment – were happy to see him go, no doubt. Don’t need that sort around. Did something very hush-hush during the war. Not sure what exactly. Good riddance to bad trash. But they couldn’t let this man just wander off. He knew too much – about what, no one actually knew. What this man had done in the war was so beyond ‘top secret’ the British government had created a fourth level of secrecy. Prime Minister Winston Churchill is thought to have said, “this is so secret it must ever be the Ultra Secret.” And Ultra it became, the very highest level of security in Great Britain. Only a very few men in the world knew the entire scope of this mind-boggling secret. Alan Turing was one of those men.
Dwight Eisenhower, Supreme Commander of all Allied Forces in Europe, considered the Ultra Secret “decisive” to our victory over Nazi Germany. Yet only a few of his subordinates ever saw intelligence from Ultra, and while they knew it was absolutely reliable, they had no idea where it came from. It was so secret, so critical to victory, we still don’t know the lengths to which the Allies went to protect it.
Did we assassinate men and women in German occupied Europe who may have known one small detail of the Ultra Secret? Most certainly. Mount hundreds of military operations to protect the secret by deceiving the Germans as to the origin of our intelligence? Yes. Lie, violate the ‘rules of war’, deceive our own commanders, break into diplomatic mail, kill anyone who might tell the Germans we knew the secret? Yes. Do we know the details? No, they have never been released to this day. The only thing we know for certain is this: the Allies would have done anything, gone to any length, to protect the Ultra Secret uncovered by Alan Turing.
After Turing had his security clearance revoked, MI5, the British Internal Security agency, as ignorant as they were small minded, watched him constantly because he knew the Ultra Secret – although they didn’t use that term since the designation of Ultra was itself Ultra Secret. They trailed him, harassed him, treated him with the worst kind of contempt – because he was a fruit, a homo, a faggot. Treated him so badly, in fact, that in March of 2009, just over one year ago, then British Prime Minister Gordon Brown made an official apology on behalf of the British Government for the way Alan Turing had been treated simply because he was gay.
Unfortunately, Her Majesty’s Government was fifty-five years too late. On 7 June 1954, police reported that a Cambridge mathematics professor named Alan Turing had committed suicide by biting into an apple laced with cyanide. Was he so depressed he committed suicide? His mother and his brother said no, nor did they ever accept the explanation given by the police. So the speculation continues: did he kill himself or was he killed? If so, who killed him?

In 1974 the British government authorized the publication of a book simply titled The Ultra Secret. What the book revealed was so shocking, so incredible, so unimaginable it changed everything we knew about the Second World War. And what it revealed was this: during World War Two the British, and later the Americans, read almost 90% of all top secret German radio traffic – and the Germans used radio as their primary method of communication. Because of gay activists in London we also learned something else: the key player in the Ultra Secret was a gay man named Alan Turing. And this is how it helped us: “During the great campaigns on land or in desperate phases of the war at sea, exact and utterly reliable information could thus be conveyed, regularly and often instantly, mint-fresh, to the Allied commanders,” wrote historian Ronald Lewin in Ultra Goes To War.
Often we decrypted Ultra messages as fast as the Germans did. And what did we learn? Almost everything: battle plans, dates of attack, the position of every ship, plane, U-Boat, soldier – we knew almost all. And we knew it all because of a homosexual named Alan Turing.
To prevent anyone from understanding the secret information they were broadcasting, the German armed forces used a coding machine so complex the British called it the Enigma. It was unbreakable. Completely and totally secure. Only it wasn’t. Why? Because in one of his many flashes of genius, mathematician Alan Turing, who was working for the British military, figured out how to crack messages coded by the Enigma. There was a small hitch. In order to perform the actions required to crack the Enigma, Turing had to invent a machine of some sort – a machine which had never existed before.
The Oxford Companion to World War Two gives this bland explanation: “Turing, Alan (1912-1954). British mathematician whose theories and work … resulted in the modern computer.” Today, the ‘Nobel Prize’ of the computing world is the Turing Award—so named to honor Alan Turing as the father of the computer age. He changed the world. Yet few gay men or gay women know of him.
Turing worked for the British military and naturally had clearance for Ultra since he created it. Yet even with Turing on our side, even knowing all we did, it still required the combined might of the three strongest nations in the world – Great Britain, the United States, and the Soviet Union – to defeat Nazi Germany. What if we hadn’t known as much as we did? What if Alan Turing hadn’t cracked the Enigma, invented the computer, and given us the Ultra Secret? What if the British military had not hired Turing because of his homosexuality? The alternative is unthinkable. Somehow gay people are left out when the ‘Greatest Generation’ is honored. Let us therefore insist, beginning from this very moment, that whenever the ‘Greatest Generation’ is remembered, we remember Alan Turing, the greatest of them all.
"EXTERMINATION THROUGH WORK"
The major concentration camps within the German Reich became significant economic enterprises during the war as their purposes shifted from correction of behavior to exploitation of labor. After establishing the German Earth and Stone Works in 1938, the SS erected several new concentration camps near quarries, while brickworks and other factories were attached to existing German camps. Technologically primitive, these operations relied heavily on the manual labor of large numbers of camp inmates working in inhuman conditions.
Homosexuals in these camps were almost always assigned to the worst and often most dangerous work. Usually attached to "punishment companies," they generally worked longer hours with fewer breaks, and often on reduced rations. The quarries and brickyards claimed many lives, not only from exertion but also at the hands of SS guards who deliberately caused "accidents."
After 1942, the SS, in agreement with the Ministry of Justice, embarked on an explicit program of "extermination through work" to destroy Germany's imprisoned "habitual criminals." Some 15,000 prisoners, including homosexuals, were sent from prisons to camps, where nearly all perished within months.
Dr. Carl Peter Værnet
1893 - 1965
Source: Matt & Andrej Koymasky, The Gay Holocaust - Nazi Criminals, 1997 - 2004
As was true with other prisoner categories, some homosexuals were also victims of cruel medical experiments, including castration. At Buchenwald concentration camp, SS physician Dr. Carl Værnet performed operations designed to convert men to heterosexuals: the surgical insertion of a capsule which released the male hormone testosterone. Such procedures reflected the desire by Himmler and others to find a medical solution to homosexuality.
Carl Værnet was born in the village of Astrup (by Aarhus) on the 28th of April 1893 and grew up in a fairly well-off farming family in Jutland, Denmark. He was educated as a physician at the University of Copenhagen, where he obtained his degree in medicine in 1923.
Værnet established himself as a general practitioner near Copenhagen in 1927 and quickly built up a successful practice. He was especially engaged in developing hormone therapies for various diseases, and he also researched shortwave and microshortwave therapy.
According to Værnet's own post-war accounts, he took various postgraduate courses with prominent professors in Germany, Holland and France, and achieved a reputation as a society doctor and became prosperous.
In those years hormone therapy was regarded as a possible cure for a much wider range of diseases than today, and great hopes were also directed towards shortwave therapy. Carl Værnet claimed that he could cure cancer with his methods, and around 1940 he claimed that he had developed a hormone therapy which could convert the sexual orientation of homosexual persons.
The method was to insert an artificial gland containing the male hormone testosterone into the groin of the patient. The functional novelty of the "gland" was that it could release constant doses of hormone into the patient thereby enabling a therapy over a long period.
Værnet was a member of the Danish National Socialist (Nazi) Party from the late 1930's. In April 1940 Denmark was occupied by Nazi Germany, and during the following years fewer and fewer patients visited his clinic because his positive attitude towards the Germans was well known. As a result of declining patients and the general business downturn during the war, by 1943 Carl Værnet realised that he could not get enough money in Denmark to carry on his hormone research.
In December 1943 Værnet was named SS-Sturmbannführer (Major), and he was placed under an SS medical company in Prague, Deutsche Heilmittel GmbH. On the 26th of February 1944 Værnet and his family moved to Prague, where they were installed in a big flat originally belonging to a deported Jewish family.
Værnet visited Buchenwald at least six times between June and December 1944 from his base in Prague. His closest partners were the SS garrison doctor in Buchenwald Gerhard Schiedlausky, who after the war was hanged for participation in medical experiments (the Ravensbrück Process in Hamburg 1946-47), and Erwin Ding, who was in charge of typhoid experiments in Buchenwald which cost at least 150 inmates their lives.
In a letter of the 31st of August 1944 from Vaernet in Prague to his SS employers at Deutsche Heilmittel, he reports that his operations have been rescheduled to 2-3 weeks later due to an air raid. The first run of operations took place on the 13th of September 1944, the second on the 8th of December 1944. Studenternes efterretningstjeneste (the Students' Intelligence Service) claimed that a total of 30-40 KZ inmates were operated on.
Carl Værnet operated on a total of 17 male KZ-inmates who were forced to undergo an operation with the artificial gland. Værnet used various types of persons for his experiments - homosexuals, non-homosexuals, criminals, non-criminals.
He performed gruesome medical experiments on gay concentration camp prisoners at Buchenwald and Neuengamme in a bid to "cure" their homosexuality. Værnet's research was carried out on the authority of SS chief Heinrich Himmler, who had called for the "extermination of abnormal existence". There is a document showing that Carl Værnet castrated homosexual KZ inmates.
An SS letter to Carl Værnet of the 3rd of January 1945 mentions 13 operated inmates and gives the names of the 7 operated on (out of a group of 10) on the 8th of December 1944: Reinhold, Schmith, Ledetzky, Boeck, Henze (who died), Köster and Parth. Before that, on 13.9.1944, 3 patients out of 5 were operated on; 2 were castrated and 1 was sterilized at the Buchenwald camp. 2 of the patients died almost immediately (one as a direct result of the operation, a large phlegmon/tissue inflammation; the other, Henze, from infectious bowel catarrh and severe emaciation on the 21st of December 1944), leaving 11 to die shortly thereafter.
Once, Værnet's experiments were disrupted by an air-raid alarm, but on the 30th of October 1944 Carl Værnet informed the SS Reich Physician, Obergruppenführer Grawitz:
"The operations in Weimar-Buchenwald were performed the 13th September 1944 on five homosexual camp inmates. Of them 2 were castrated, 1 sterilized and 2 not operated. All 5 had the "artificial male sexual gland" implanted in different sizes..."
A week earlier, Carl Vaernet had written of trial person no. 1, inmate no. 21.686, Bernhard Steinhoff, a 55-year-old gay man, after the castration:

"The operation wound has healed, and there is no rejection of the implanted sexual gland. The person feels better and had dreams of ladies..."
Carl Værnet addressed a final report to Heinrich Himmler on the 10th of February 1945 where he described his hormone projects and his alleged results without even mentioning his experiments in Buchenwald. This omission suggests that his research in Prague and Buchenwald was probably deemed - even by him - a failure; or at least not sufficiently credible to merit a mention.
We do not know precisely how many underwent an operation; 13, 15 or more have been mentioned, but the letters make clear that an Endlösung ("final solution") of the homosexual question was given as high a priority as the Jewish question; the program, however, only started in the last months of the Third Reich, when everything was disintegrating. As time went by, the reports of successes dried up, and Værnet became inaccessible to his employers.
In February and March 1945 Carl Værnet returned to Denmark. When Denmark was liberated on the 5th of May 1945, Carl Værnet was arrested and detained at Alsgade Skole camp in Copenhagen. Various Danish police officers who had been inmates in Buchenwald could confirm that Værnet had visited the camp wearing an SS uniform, so there was no way for him to deny that.
The leader of the British Military Mission at Alsgades Skole in September 1945, Major Hemingway, stated that Værnet "undoubtedly will be sentenced as a war criminal". But during his detention Carl Værnet succeeded in awakening the interest of the British and Danish authorities in his hormone treatment ideas. He was allowed to keep contact with the outside world from his cell for the purpose of promoting his hormone therapies. Værnet seems to have gained promising contacts with the British-American pharmaceutical company "Parke, Davis & Comp. Ltd., London & Detroit" and maybe also with the American chemical giant Du Pont.
In November 1945 Værnet was hospitalised in Copenhagen, and the authorities at Alsgades Skole agreed to release him. But the charges against Carl Værnet were not dropped. He allegedly suffered from a critical heart condition, and in February 1946 he was discharged from hospital and allowed to go to his brother's farm in the countryside as a convalescent. The consultant doctor of the hospital, Tage Bjering, had declared that Carl Værnet suffered from a critical, chronic heart condition for which there was no cure at the time. Bjering estimated that Værnet could probably only live one or two more years "and maybe not even that long".
From research material, however, it is clear that Carl Værnet's electro-cardiogramme was normal, and that he received no treatment during his three months in hospital; he merely stayed there. During his stay he typed long letters to his business partners about his hormone therapies, and promoted his ideas to various corporations abroad.
Danish and British authorities were implicated in aiding Værnet's escape to Argentina, where he avoided justice after the end of World War Two; successive Danish governments covered up his crimes against humanity for over 50 years. Værnet settled in the Argentine capital, Buenos Aires. From around 1950 Carl Værnet had his own clinic as a general practitioner in Buenos Aires.
After a few years in the health ministry in Buenos Aires he opened a private practice at Calle Uriarte 2251, but his clinic was not as successful as in Copenhagen. He never really learned the language or got to know the people. He changed his name again, to the Spanish Carlos Peter Varnet, and lived in constant terror of being found out.
In 1959 and again in 1965 Carl Værnet tried to sound out through his son Kjeld Værnet whether the Danish authorities would refrain from pressing charges against him if he went back to Denmark. On both occasions the answer was that the authorities could give no such guarantee. Therefore Værnet stayed in Argentina till he died in November 1965, living openly there with the full knowledge of the Danish and Allied authorities.
A letter by Peter Tatchell of "OutRage!" to the Danish government on the 16th of March 1998 triggered the reopening of the Værnet case in the Danish media. This case had previously been only sporadically mentioned in Danish newspapers - in the late 1940's and the 1980's.
Refusing to launch an inquiry into Tatchell's allegation that Værnet had committed crimes against humanity and that his escape from justice had been aided by prominent Danish citizens, the government of Denmark advised Tatchell to do the criminal investigation himself. It referred him to the Danish National Archives. When Tatchell approached the National Archives, he was told that the files on Værnet were not open to the public and that they could not be examined for 80 years from 1945.
The Danish government did not answer OutRage! for over a year - until June 1999. The Danish Prime Minister passed the buck to the Ministry of Justice and the Ministry of Justice passed the buck to Tatchell and the National Archives of Denmark.
Unofficially, however, the government's reaction was to eventually give Danish journalists and historians access to the hitherto closed files. The most important details of Værnet's story were subsequently finally revealed in Danish newspaper stories in the summer and autumn of 1999.
Carl Værnet was a prisoner in the Alsgades Skole (school) POW detention centre in Copenhagen from June to November 1945. Here Danish war criminals and other people who were suspected of cooperation with the Germans were detained.
The detention centre was run jointly by the British Military Mission in Denmark and by the Danish intelligence service under the jurisdiction of the Danish police. The leading officer at Alsgade Skole was a British Major, Ronald F. Hemingway, who gave his permission for the transfer of Carl Værnet to a hospital in Copenhagen in November 1945. Værnet claimed he was suffering from a serious heart condition.
During his detention, Carl Værnet - unlike the other prisoners - was allowed to communicate with the outside world, including with business people who were working to market his hormone therapies world wide. These included therapies to "cure" homosexuality (!). The evidence clearly indicates that Værnet succeeded in convincing the British military authorities, as well as Danish police officers, that his hormone therapies were morally justifiable and could be an international success. He therefore received special, privileged treatment in the POW camp.
SS Doctor Carl Peter Vaernet's Medical Experiments on Male Homosexuals at Buchenwald (1944)
Excerpt from the "Main Report" by Eugen Kogon, an Austrian who had been interned in Buchenwald as an anti-fascist political prisoner and who had worked as first medical secretary in the Block 50 medical ward at the camp.
The report was written in 1944 under the auspices of the Psychological Warfare Division of the U.S. Army, immediately after the liberation of Buchenwald. It remained unpublished until 1995.
"In fall 1944, the Danish SS major Dr. Vaernet, who had his headquarters in Prague, arrived in Buchenwald. He started a series of experiments to cure homosexuality, with the approval of Himmler and Reich medical chief of the SS Brigadier General Dr. [Ernst-Robert] Grawitz, and SS Brigadier General [Helmut] Poppendick, Berlin (via Experimental Department V, Leipzig, of the Reichsführer SS).
Implanting a synthetically produced hormone in the right side of the groin was supposed to effect a change in the sex drive. The SS doctors made terrible jokes about it; the prisoners spoke of "flintstones" that were supposed to help those implanted with them along the proper path. Vaernet also experimented with castration. It was tried on a total of about fifteen men, of whom two died. Doubtless that was a result of the operation, since one of them developed a major infection and the other died a few weeks later as a result of general weakness.
Otherwise the human guinea pigs of this special series of experiments were not treated badly. But no positive findings were ever obtained."
David A. Hackett (ed.). The Buchenwald Report, Boulder, Colo.: Westview Press, 1995, page 79.
Dr. Værnet's role in the medical abuse of gay prisoners is documented in the archives at the International Tracing Service at Arolsen (example: ITS Arolsen, book 36, folder 405). It is also cited in the books The Pink Triangle by Richard Plant (Mainstream Publishing, Edinburgh 1987) and Hidden Holocaust? by Dr. Gunter Grau (Cassell, London 1995). Historical records show that Værnet castrated and implanted hormones in homosexuals in attempts to alter their sexual orientation at the Buchenwald and Neuengamme camps.
"At Buchenwald there was a doctor who tried to change them by instituting a particular gland. The operations were crude. Many died as a result of botched surgery. Others were beaten to death, drowned headfirst in water, hung by their arms till they were dead. Some were castrated ...really, the worst you can imagine."
Excerpt from: Tim Teeman, Forgotten victims of the Holocaust,
Værnet's attempted cures for homosexuality paralleled similar research and therapies officially authorised in Britain. The British mathematical genius, Alan Turing, who broke the Nazi Enigma-code during the Second World War and initiated the modern computer revolution, suffered tragically as a result of similar British attempts to "cure" homosexuality. Turing's gayness became an issue in 1952. Declared a security risk by the British authorities, he was forced to undergo a humiliating hormone treatment that was supposed to eradicate his homosexuality. Two years later, he committed suicide.
The death of Alan Turing was a result of a treatment not dissimilar to the one developed by Carl Værnet: both were premised on the idea that homosexuality was the result of a hormonal imbalance and could be "cured" by hormone therapy. This view was widespread in the medical profession until the 1950's and persisted in some medical circles until the 1970s.
From: Niels Birger Danielsen et al., Værnet, den danske SS-læge i Buchenwald, JP Bøger, Denmark, 2002.
Two e-mails from Cristian Værnet to Tatchell about his grandfather (originals in Spanish, translated here into English):

"With great surprise I read about doctor Carl Værnet. I never really knew the person who, from the details given, could be my grandfather (my father's father). Both are now deceased.

I feel very distressed and tremendously pained at the possibility that the Carl Værnet referred to on your pages was my grandfather. I believe his complete name was Carl Peter Værnet or Peter Carl Værnet. I can read English but find it hard to write, so please forgive me for writing in Spanish. I will find out the dates of birth and death and the cemetery in which he was buried, and everything else I can, in order to clarify the matter for you and for myself, and I will write to you again.

I greet you very respectfully, and in the hope that this person was not my grandfather."
"Thank you for your e-mails and for acts that already show me you are a good person, which is why I am glad to have been in contact with you. As for the possibility that the connection with Værnet may still be confirmed, I know, but it is something I will have to live with, and I take these errors of the past as a warning so that neither our generation nor any future one repeats them. Thank you for your good intentions.

The only person I could ask about Carl is my mother, since we have not kept up correspondence with Papa's brothers, who live in Denmark, since my father died. I wish Papa were still alive to clear up all the questions that now arise for me. This is what my mother remembers or has written down:

Carl was born in Denmark on 28/4/1893, married Edith Frida Hamershoj and had 3 children. The youngest of them was my father. When Papa was 3 years old, Carl left the family in Denmark and went to Germany. He remarried and had another three or four children. After the war he came to Argentina, living and working in Buenos Aires as an endocrinologist in the Ministry of Health. He died in Buenos Aires on 25-11-1965 and is buried in the Britanico cemetery. So that you can corroborate this, the cemetery's telephone number is 0054 1 553-3403. My father also came to Argentina, but settled in the Province of Chaco (a little more than 1,000 km from Buenos Aires). The information we had about Carl was that he had worked in the war, but as a doctor, and not in the experiments described on your Internet pages. When Carl died I was 8 years old, so we will try to contact some of Papa's brothers in order to get more information on the matter.

I know that what I can do will not erase the suffering, but I hope at least it serves to ease a little the pain of the relatives and victims, and that the atrocities committed under the excuse of war never happen again.

I also hope that all the errors of the past serve as a lesson to our generation and to future ones, so that no acts against humanity are committed, and no one is discriminated against or persecuted for their religion, skin colour or sexuality, whatever their choice may be."
GRAVE FOUND OF NAZI DOCTOR WHO EXPERIMENTED ON GAYS
April 28, 1893 - 1965
The grave of a Nazi doctor who performed experiments on gay men in concentration camps has been located in Argentina's Britanico Cemetery.
Historical records show that Carl Peter Vaernet, a Dane, castrated and implanted hormones in homosexuals in attempts to alter their sexual orientation at the Buchenwald and Neuengamme camps.
After World War II, he escaped to South America without being tried for his crimes; he died in 1965.
Vaernet was tracked down by a Danish gay activist who used the Internet to locate Vaernet's grandson in Argentina.
The international hunt for Danish Nazi doctor Carl Peter Vaernet is over.
The well-known gay activist and news-stuntman Peter Tatchell of OutRage! in England has written letters to the Danish PM Nyrup Rasmussen and Argentinian President Carlos Menem about the fate of Vaernet. The credit for reopening the Vaernet case has to go to Peter Tatchell of OutRage!
IHWO then, after a very short internet search* on the Alta Vista search engine, found a relative in Argentina, the grandson of Carl Vaernet, who said that both his father and grandfather are dead, and that although the gruesome past of Vaernet has been a complete shock for him, he has graciously investigated the details and the cemetery of Vaernet (Britanico, row 11.A.120).
*Before that, a wholly futile search was undertaken in 1993 when 2 TV documentaries mentioned a Danish doctor Vaernet experimenting on gays during and after WW2. It turned out that the Nazi KZ-camp doctor mentioned in the NDR 1991 production "Wir trugen ein grosses "A" am Bein" was not identical to the post-war brain surgeon Kjeld Vaernet mentioned in the DR TV documentary from 1990. The Vaernet in the German film was his father Carl Vaernet, who disappeared to Argentina in 1947, according to Preben Hertoft, who kindly gave the few details then known about Carl Vaernet. Thereafter the search fizzled out with an unanswered letter to Simon Wiesenthal in Vienna asking for further details.
Now that the fate of the most notorious KZ doctor who experimented on gays is known, the question remains: who helped him flee to Argentina in 1947, and what has the government done in the last 50 years to bring the Danish Mengele home for a war-crimes trial?
Carl Peter Vaernet timeline based on available sources:
Carl Peter Vaernet was born in the village of Astrup (by Aarhus) on the 28th of April 1893, the son of a wealthy horse trader.
In August 1920 he married Edith Frida Hamershoj and had 3 children, of whom Kjeld Vaernet was born in November 1920. The youngest of them was the father of Christian Vaernet. When Christian's father was 3 years old, Carl left the family in Denmark and went to Germany. He entered a new marriage with Gurli Marie (1902-1955) and had another three children.
In December 1921 he changed his name from the common name Jensen to the unique name Værnet.
In 1923 he qualified as a doctor (candidatus medicinae) together with the later Danish Nazi leader Fritz Clausen.
In 1929 a castration law, at the behest of the eugenicist doctor Knud Sand, was adopted just before homosexuality was legalized in 1930. Carl Vaernet started his endocrinological experiments in 1932. He used mice for his experiments, while his rival Knud Sand used chickens. Sand wanted to transplant "healthy testicles" into gays, while Vaernet wanted a hormonal, medical solution.
In the years 1932-1934, after practising at 2 hospitals in Copenhagen, he went to Berlin, Giessen, Vienna, Göttingen and Paris for further studies, specializing in ultrasound treatments.
In 1939 he started his testosterone research in earnest, and in 1941 a newspaper reported how the crowing of hens with implantations could be heard outside his clinic.
Fritz Clausen and Carl Vaernet
In the years before WW2 Vaernet was one of the leading society doctors in Copenhagen, but he lost patients as his close contact with the Danish Quisling doctor Fritz Clausen became known. Vaernet was also a friend of the Reich Plenipotentiary (Reichsbevollmächtigter) for Denmark, Dr. Werner Best, who highly recommended him. A brother, Aage Vaernet, was also a member of the Danish Nazi party, D.N.S.A.P.
In 1942 he implanted testosterone in a gay schoolteacher with "good result", as the teacher married. He patented his "Pressling" hormone metal-tube depot first in Copenhagen in 1943, then in Germany. The open-ended tube released its contents over one, two or more years, depending on the model. This was supposed to cure the theorized deficit of testosterone in homosexuals. He sold his clinic on Platanvej, Copenhagen, to the German occupation forces, and it was subsequently bombed/sabotaged by the Danish resistance movement (Holger Danske) in 1944.
SS-Leader HEINRICH HIMMLER & Gestapo-Chief Ernst Kaltenbrunner
Through Heinrich Himmler, the doctor got an employment contract with the "Deutsche Heilmittel GmbH", a front company coming directly under the SS. The contract of the 15th of November 1943 was signed by Gestapo chief Ernst Kaltenbrunner and the Reich medical chief of the SS, Brigadier General Dr. [Ernst-Robert] Grawitz. The salary was 1,500 Reichsmark a month. The patents in connection with Vaernet's research were to be registered in Vaernet's name, but with a licence of 15 years to Deutsche Heilmittel. The careful contract stating the terms of the future use of the Vaernet Cure indicates that the SS had an "Endlösung" comprising all homosexuals of the 3rd Reich in mind.
In a letter of the 31st of August 1944 from Vaernet in Prague to his SS employers at Deutsche Heilmittel, Possiebrader Landstrasse, he reports that his operations have been rescheduled to 2-3 weeks later due to an air raid. The first run of operations took place on 13.9.1944, the second on the 8th of December 1944. "Studenternes efterretningstjeneste" claimed that a total of 30-40 KZ inmates were operated on.
The Buchenwald camp
An SS letter to Carl Vaernet, Petersgasse 10, Prague, of the 3rd of January 1945 mentions 13 operated inmates and gives the names of the 7 operated on (out of a group of 10) on the 8th of December 1944: Reinhold, Schmith, Ledetzky, Boeck, Henze (who died), Köster and Parth. Before that, on 13.9.1944, 3 patients out of 5 were operated on; 2 were castrated and 1 was sterilized at the Buchenwald camp. 2 of the patients died almost immediately (one as a direct result of the operation, a large phlegmon/tissue inflammation; the other, Henze, from infectious bowel catarrh and severe emaciation on the 21st of December 1944), leaving 11 to die shortly thereafter.
The Danish SS doctor was stationed in Prague, at Petersgasse 10, a property owned by interned Jews. He had the rank of Sturmbannführer (Major) and reported through his superior, SS-Oberführer / SS Brigadier General [Helmut] Poppendick, Berlin (via Experimental Department V, Leipzig, of the Reichsführer SS (Himmler)).
(From Josef Kohout - "Heinz Heger" / (Fredrik Silverstolpe), Fangerna med Rosa Triangel, Forfatterforlaget 1972/1984. Original source "Tagesbuchblaetter" from the magazine "Humanitas" by Neudegg.) (Vaernet claimed to have developed a synthetic testosterone hormonal implant which would cure homosexuality. The SS gave him a research position, the necessary funds, laboratory facilities and the concentration camp population as experimental subjects.) The other SS doctors joked about the matter, and the prisoners talked about the "firestones" which, through castration, were supposed to cure the patients.
Carl Vaernet was "out of practice" from May 1945 to September 1945; he was interned by the British occupation forces in this period. From the end of May the government knew that he had been a Sturmbannführer at the Buchenwald KZ camp. Major R.F. Hemingway of the Allied forces confirmed in a letter to the Danish Medical Association after the 29th of May 1945 that Carl Værnet was a prisoner of war and "undoubtedly would be punished as a war criminal".
On the 2nd of January 1946 he was at the Kommunehospitalet in a bid to secure passage to Sweden for special heart treatment. Here the sickly-looking Carl met his eldest son Kjeld, and together they tried to sell the patented KZ concentration camp hormone implant to the US firm DuPont, according to archive papers. Carl also tried to pre-empt exclusion from the medical association by resigning.
On 29th May 1945, the chairman of the Danish Medical Association sent the Ministry of Justice an affidavit signed by a Danish police officer who had been incarcerated in Buchenwald. This affidavit identified Vaernet as having been a serving SS officer. The report from the chairman of the Danish Medical Association was, apparently, ignored by the Justice Ministry in Copenhagen.

In the autumn of 1945, the British handed over Dr. Vaernet to the Danish authorities.

On 2nd January 1946, the Danish Medical Association received a letter from Dr. Vaernet's lawyer, informing them that he had resigned from the association.

It is known that Dr. Vaernet was eventually transferred to hospital, on the grounds that he was allegedly suffering from a heart complaint (which may well have been fictional in order to facilitate his release from detention). When did this transfer take place? Who authorised it? What independent medical report, if any, confirmed his heart condition?
In 1946 Carl Vaernet, fully legally and with permission from both the police and the Rigsadvokat (public prosecutor), travelled to Stockholm and the "Serafimerlazarettet" hospital. The Danish police even gave him some Swedish currency for his trip. Later, too late, the police issued an order for his arrest.
In Sweden, he made contact with the Nazi escape network, which spirited him away to Argentina, probably in late 1946 or early 1947. In April 1947 Brigadier General Telford Taylor (1908-1998), American chief of the "Allied War Crime Commission" and chief prosecutor at Nürnberg, wrote to the Danish Medical Association (Lægeforeningen) that he knew about Vaernet's experiments. Through Sundhedstyrelsen (the health authority), Laegeforeningen passed this message on to the Rigsadvokat, which laconically replied that as far as it knew Vaernet was in Brazil; otherwise his address was unknown. No one asked the commission for details about Vaernet. Danish police officers at the Alsgade School were later exposed as corrupt in the "Spider" case (Edderkobbesag), and even the British head of the Alsgade camp/school was considered corrupt. Yet not a single prisoner escaped the camp. At one point after WW2, Danish intelligence was, incredibly, led by the SS man Toepke, who was later removed from his post along with the Argentinian "rat-line" man Pineyro in 1947, when the operation became too obvious. The flight of Carl Vaernet could have been quite unspectacular: by air, the route was the normal scheduled Stockholm/Geneva/Buenos Aires service.
On 19th November 1947, the Copenhagen newspaper Berlingske Tidende carried a letter from a Danish exile living in Argentina which reported that Dr. Vaernet was working in the Buenos Aires health department.
After a few years in the health ministry in Buenos Aires he opened a private practice at Calle Uriarte 2251, but his clinic was not as successful as in Copenhagen. He never really learned the language or got to know the people. He changed his name again, to the Spanish Carlos Peter Varnet, and lived in constant terror of being found out (just like Mengele).
In 1955 he was run down by a taxi and severely injured, suffering 15 fractures. He was nursed by his wife Gurli until September 1955, when she was electrocuted in front of a trolley train and died. During his 1½-year stay in hospital he ran out of money, and his daughter Lull took over his care. Both Lull and Kjeld Vaernet speak of Carl's kindness and generosity.
In 1948 the Danish Medical Association officially sought Carl Vaernet's exclusion, and in April/May 1959 it had reservations when he applied for amnesty in Denmark. On the 15th of April 1959 Rigspolitiet (the national police) told Laegeforeningen that serious charges remained against Vaernet. In the beginning of the 1960s his son Kjeld Vaernet, on a trip to a medical conference in Buenos Aires, met Carl at a hospital where Carl was being treated for an ulcer. In the last months of Carl's life, in 1965, he again applied for clemency in Denmark. He was again turned down, this time in official letters signed by the later president of the Sea & Trade Court, Frank Poulsen.
The fate of Vaernet is described by his grandson Christian Vaernet: Carl Peter Vaernet died of an unknown fever on the 25th of November 1965. He is buried in the Británico cemetery, row 11.A.120.
The photo of the grave was taken in October 1999 by Jakob Rubin, Jyllands-Posten's South American correspondent. In November 1999 Professor Niels Høiby also took a picture of the grave, later shown on Danish TV2.
One of the German prisoners who died in Auschwitz concentration camp was Ernst Ellson, born in Duesseldorf on February 18, 1904, of Jewish religious denomination, bachelor, who resided with his parents in Essen. The vice squad, then responsible for supervising places—certain bars and, above all, public toilets—where gays regularly met, had him under observation from 1935. In mid-November 1940, Willy M., a male prostitute, was caught in the act. Under interrogation, he identified Ellson as an occasional client. The police arrested Ellson on November 22. Since he was a Jew, the criminal police, following procedure, notified the Gestapo, which brought charges. On March 14, 1941, the municipal court in Essen sentenced Ellson to four months imprisonment, with time off for the period he had already spent awaiting trial, for “perverted promiscuity” under Article 175 of the penal code. Ellson was scheduled to be released on March 23.
The criminal police regarded the sentence as too lenient. They therefore requested that the Gestapo “take the appropriate measures.” On the day of Ellson’s release from prison, he found a Gestapo agent waiting for him outside the prison gates with a “temporary preventive detention” order. On April 18, the Berlin Gestapo issued an arrest warrant on the following grounds: “Ellson . . . is a threat to the existence and security of the nation by reason of his having committed perverted promiscuity. . . . It is to be feared that, if left at large, he will persist in behavior that is harmful to the national health. . . . “ The warrant was signed by Reinhard Heydrich.
Here is how the further fate of Ernst Ellson and his parents played out: he was sent to Buchenwald Concentration Camp in a collective transport on May 16, 1941. His elderly parents were committed to the Holbeckshof transit camp in Essen-Steele on April 25, 1942, and transferred from there on July 21 to the Theresienstadt camp, where they died. Ernst Ellson was sent to Gross Rosen Concentration Camp on September 15, 1942, and transferred to Auschwitz on October 16, 1942. On November 26, 1942, the Auschwitz Concentration Camp commandant’s office notified the Gestapo in Duesseldorf that Ernst Ellson “ . . . died of pneumonia in the camp hospital on November 23, 1942 at 9:30 a.m. in the morning. . . . The family should be informed that his remains were cremated at the cost of the state and the urn deposited at the urn cemetery at the crematorium here.”
Both the stated cause of Ernst Ellson's death and the cemetery where his ashes were said to be deposited were fictitious. It bears remarking that this gay Jew survived a year and a half in the Buchenwald and Gross-Rosen camps. Five weeks in Auschwitz were enough to finish him off. This indicates the conditions and rigors prevailing in Auschwitz, in comparison to other camps.
Homosexuals also reached Auschwitz Concentration Camp as "political" prisoners. Some of them were arrested for political reasons with no reference to their sexuality. Others managed to change their prisoner category on such occasions as transfers between camps. This is what Karl Gorath did. He wore the pink triangle in Neuengamme Concentration Camp, where he was assigned to labor as a Pfleger (nurse). He and four of his fellow Pflegers were transferred to Auschwitz at the beginning of June 1943. On June 11, he was registered in Auschwitz Concentration Camp as a Schutzhäftling, that is, a political prisoner. He was evacuated to Mauthausen in January 1945, and survived. He settled in West Germany after the war.
In his camp memoirs, Gorath recounts that he was made a block supervisor in Auschwitz. He made friends there. Two younger Poles, Tadeusz and Zbigniew, became his lovers. He came back to the site of the Auschwitz camp 50 years later to show his gay friends the small room he lived in on the top floor of one of the blocks.
He told them: “I had my own room as a block supervisor . . . it was right here . . . this is where I spent the happiest days of my life . . . with Zbigniew. . . .” His voice broke off when he spoke with tears in his eyes about how only once in his life he experienced such deep love from another man, and that it was “here, in the camp, among all the misery surrounding us, never before, and never again—never more: I met the love of my life in Auschwitz.”
Later, he learned in the Archives of the Auschwitz-Birkenau State Museum that Tadeusz and Zbigniew had both died in Auschwitz. Before going home to Germany, he and his friends placed a wreath at the foot of the memorial in Birkenau, in memory of all the gay victims of Nazism. Next to the wreath, Karl left a small bouquet of pink roses with a handwritten note that read: “To my comrades Zbigniew and Tadeusz – from Karl.”
1533 The first anti-homosexual law is passed in England (25 Henry 8, chapter 6), which "adjudges buggery a felony punishable by hanging until dead. The Buggery Act was piloted through Parliament by Thomas Cromwell in an effort to support Henry VIII's plan for reducing the jurisdiction of the ecclesiastical courts.... It was on the books primarily as a symbolic token of the supremacy of the secular courts over the ecclesiastical courts."

1624 The Buggery Act of Henry VIII is adopted by the original 13 American colonies. In 1624 Richard Cornish, Master of the ship Ambrose, anchored in the James River in Virginia, was hanged "for committing sodomy with the 29-year-old cabin boy William Couse."

1772 In July, Captain Robert Jones is convicted in England for sodomizing a 13-year-old boy. The age of consent at the time is 14. Sentenced to death, he is pardoned by the King on condition that he leave the country.
During August and September the case is widely discussed in all the mainstream papers, and calls are made to reform the law. This is the first time the nature v. nurture issue is publicly debated. Homosexuals are thought to have "an inborn propensity."
In this debate, "not only the entire literate class, but even labourers who had newspapers read to them at taverns -- would have been made fully aware of homosexuality: from explicit detailed descriptions of anal intercourse and masturbation; to legal, religious, and social attitudes to homosexuality; to supposed characteristics of homosexual men; to its prevalence across society. The attitudes to homosexuality reflected in the newspapers ranged from simple stereotypical homophobia... to more complex attitudes which included a defence of homosexuality on the grounds that it was a natural trait."

1821 "Mexican independence from Spain in 1821 brought an end to the Inquisition and ... homosexual oppression...
The intellectual influence of the French Revolution and the brief French occupation of Mexico (1862-67) resulted in the adoption of the Napoleonic Code. This meant that sexual conduct in private between adults, whatever their gender, ceased to be a criminal matter."

1864 German lawyer Karl Heinrich Ulrichs publishes a pamphlet, Vindex: Social and Legal Studies on Man-Manly Love. He declares 'man-male love' to be inborn. Supposedly it is the natural, healthy expression of a 'female soul in a male body' - a condition he calls 'Uranism'. Those characterized by this condition he calls 'Uranians'. By means of this hypothesis, Ulrichs hopes to demonstrate the injustice of punishing sexual contact between men: Uranians do what they do because of what they are. No legislator, however, should punish people for what they are. Above all, Ulrichs wants to prevent the extension of the unreformed Prussian law against 'unnatural vice' to all German states. This threatens to occur as a result of German unification under Prussian leadership. (In Bavaria, Württemberg and Hannover the old law had already been abolished.)

1869 "German Karl-Maria Kertbeny, an Austrian-born journalist and human rights campaigner, put forward the view that homosexuality was inborn and unchangable, an argument which would later be called the 'medical model' of homosexuality.... In the course of [his] writings Kertbeny coined the word 'homosexual' as part of his system for the classification of sexual types....
Classical scholars have regretted Kertbeny's neologism ever since. The word homosexual combined a Greek prefix, homo, meaning 'same' with a Latin noun, sexus, meaning 'sex' (in the sense of gender). The rules of word-formation generally forbid combining Greek and Latin elements. Pure Greek forms would have been homoerotic and homoeroticist. The word also gives rise to confusion between the Greek homo and the Latin homo, meaning 'man,' as in homo sapiens. Many people have assumed that a homosexual is a person attracted to men, and that the word cannot therefore be applied to lesbians."

1871 King Wilhelm establishes the Second Reich in Germany, adopting a harsh penal code from Bavaria, including "Paragraph 175," which outlaws "lewd and unnatural behavior." This forces Karl Ulrichs to stop publishing educational pamphlets on homosexuality. Ulrichs later flees to France and dies in 1895.

1896 German Magnus Hirschfeld, MD publishes the pamphlet Sappho and Socrates, which describes the origin of homosexuality as taking place in a bisexual embryo.
"Hirschfeld accounted for diversity in sexual orientation in terms of the bisexual nature of the developing fetus, but, in keeping with his training as a physician, he spoke of the 'brain' where Ulrichs had spoken of the 'mind.'
Hirschfeld posited the existence, in the embryos of both sexes, of rudimentary neural centers for attraction to both males and females. In most male fetuses, the center for attraction to women developed, while the center for attraction to males regressed, and vice versa for female fetuses. In fetuses destined to become homosexual, on the other hand, the opposite developmental sequence took place."

1897 The first gay rights organization is formed in Germany by Magnus Hirschfeld, MD, Adolf Brand and Max Spohr. It is called the Scientific Humanitarian Committee. By 1900 they publish 23 books, as well as collecting thousands of "prominent" signatures on a petition to abolish Paragraph 175.
DATES & EVENTS: 1900 - 1947

1903 The first large-scale survey on homosexuality, conducted by Magnus Hirschfeld and distributed to 6,611 German students and workers, finds that 2.2% of male respondents claim to have had sex with other men. Due to complaints, his studies are soon terminated by legal action.

1905 The Swiss psychiatrist Auguste Forel publishes his book The Sexual Question, which raises demands that are revolutionary for its time (abolition of most sex laws, marriage for same-sex couples etc.). Forel deliberately combines medical and socio-political viewpoints.
"In 1905’s Three Essays on the Theory of Sexuality, Sigmund Freud put forward sexual theories, including his thoughts on the origins and meanings of homosexuality....he saw homosexuality as the unconflicted expression of an innate instinct.... However, Freud also believed that even adult heterosexuals retain the homosexual component, albeit in sublimated form.
Freud saw adult homosexuality as a developmental arrest of childhood instincts which prevents the development of a more mature heterosexuality." 1917 The Soviet Union abolishes all anti-gay legislation.
Eugen Steinach, MD, of Austria publishes his theories that testicular secretions in homosexual men are abnormal and that they drive brain development in a female rather than a male direction. He publishes the results of his experiment "transplanting a testicle from a heterosexual man into an 'effeminate, passive homosexual man.' According to the report, the man was totally 'cured' -- he was said to have lost all attraction to men and to have developed normal heterosexual feelings. Some further successes were reported, but eventually the procedure was exposed as ineffective." 1919 Hirschfeld establishes the Institute for Sexual Science in Berlin, which soon has 20,000 books in its library, and a staff to counsel gays and educate society.
Other gay societies are soon established, along with a community center and committees to coordinate law reform measures. 1924 Biologist J.B.S. Haldane, observing some possible same-sex behavior in animals, writes "The universe is not only queerer than we suppose, it is queerer than we can suppose." Since then homosexuality has been observed in over 450 animal species. 1928 On May 14, 1928 the National Socialist Party (Nazi) in Germany issues its official view on homosexuals:
"It is not necessary that you and I live, but it is necessary that the German people live. And it can live if it can fight, for life means fighting. And it can only fight if it maintains its masculinity. It can only maintain its masculinity if it exercises discipline, especially in matters of love. Free love and deviance are undisciplined. Therefore, we reject you, as we reject anything which hurts our people. Anyone who even thinks of homosexual love is our enemy."1928 The Reichstag Committee, by a vote of 15-13, approves the Penal Reform Bill, which abolishes homosexual crimes. The German Communists support this vote. Before the law could be put into effect the stock market crashes, the Bill is tabled, and the Nazis come to power. 1930 Hirschfeld visits the U.S., delivering a series of lectures to medical groups, advocating for the decriminalization of same-sex acts. 1932 Until this year, Hitler tolerates some gay Nazis, especially Ernst Rohm, who is head of the Brownshirt troopers, whom Hitler needed as an ally. By 1932, Rohm's group had grown to 500,000 members, and Hitler felt threatened. An assassination attempt fails, and Rohm flees to Bavaria. 1933 February: All gay bars and hotels are closed in Germany.
March: The West German Morality League begins a campaign against Homosexuals, Jews, Negroes and Mongols.
May 6: "A Nazi goon squad plunders Hirschfeld's Institute for Sexology, which is then promptly closed by the authorities. The library is publicly burned four days later together with the books of other 'Un-German' authors like Freud, Brecht, Feuchtwanger, Werfel and Stefan Zweig. Most sexologists lose their opportunities to work, because they are Jewish. They flee into exile." 1934 On June 30, 1934, which is now called "The Night of the Long Knives," Hitler's troops raid a Bavarian resort and arrest Rohm, who is later shot. Simultaneously, 200 Brownshirt leaders suspected of homosexuality and allegiance to Rohm are rounded up and shot. The same day Hitler gives the order to purge all gays from the army. A law is passed requiring sterilization of all homosexuals, schizophrenics, epileptics, drug addicts, hysterics, and those born blind or malformed. 1936 As part of a clean-up campaign to prepare for the Berlin Olympics, homosexual meeting places are raided and homosexuals are sent to concentration camps. All activities of the League of Human Rights are banned. 1938 Alfred C. Kinsey, a zoologist at Indiana University, begins his "mostly sociological studies of human sexual behavior." 1940 "What happened around 1940 ... more and more of the mass of the population began to identify as 'heterosexual' and see any homosexual behavior as transgressive; and secondly among self-identified 'queers' a shift in desired sexual partner took place. Previously 'queers' tended to prefer 'male' men but now 'queers' began to prefer other 'queers' as sexual partners." 1942 The Reich Ministry of Justice publicly adopts the death penalty for homosexuals.
Civilian records in Nazi Germany reveal that 46,436 homosexuals were convicted and imprisoned under Paragraph 175 between 1933 and 1943 (no records exist for the remainder of WWII).
DATES EVENTS: 1948 - 1968 1948 Alfred Kinsey, et al. publish Sexual Behavior in the Human Male, which states:
"Males do not represent two discrete populations, heterosexual and homosexual. The world is not to be divided into sheep and goats. It is a fundamental of taxonomy that nature rarely deals with discrete categories... The living world is a continuum in each and every one of its aspects..."
The book shocks Americans, especially with its claim that 10% of the population could be homosexual.1950s "In the late 1940s and early 1950s, Republican demagogues charged that homosexuals had infiltrated the federal government under the Roosevelt and Truman administrations and that they posed a threat to national security. They considered communists and homosexuals both to be morally weak and psychologically disturbed. They also argued that homosexuals could be used by the communists—blackmailed by them—into revealing state secrets. This set off a Lavender Scare that affected the lives of thousands of Americans.
Much of the vast apparatus of the Cold War loyalty/security system, initiated under the Truman administration and expanded under the Eisenhower administration, was focused on ferreting out and removing both communists and homosexuals from government positions. Civil servants describe horrendous interrogations by government security officials about their sex lives. Merely associating with 'known homosexuals' or visiting a gay bar was considered strong enough evidence for dismissal....
Though a congressional committee spent several months in 1950 studying the threat homosexuals allegedly posed to national security, they could not find a single example of a gay or lesbian civil servant who was blackmailed into revealing state secrets-not one. Subsequent studies have confirmed this. But the myth of the homosexual as vulnerable to blackmail and therefore a security risk endured for decades."1950 The Mattachine Society, considered the first modern gay rights organization, is formed in Los Angeles on November 11. "Harry Hay, the founder of the Mattachine Society in California, knew of the homosexual purges going on in Washington as early as 1948. He feared that as the Cold War with the Soviet Union escalated and American society took on a wartime footing, the purges would spread to the private sector and gays and lesbians would find it impossible to find employment. It was this sense of an "encroaching American fascism” that inspired him to found the Mattachine Society in 1950-1951. Working in a defense industry plant in Los Angeles, Hay understood the power of the federal government in setting employment policies." 1952 "In the 1952 presidential election, Republican campaign rhetoric portrayed Eisenhower and Nixon as 'God-fearing men' who were 'for morality.' They promised to clean up the mess in Washington, including the immorality in the State Department. Their Democratic opponent, Adlai Stevenson, was portrayed as an intellectual egghead with a 'fruity' voice. The rumors that Stevenson was a homosexual were so widespread that the tabloid magazine Confidential ran a cover story about 'How that Stevenson rumor started.' Because of the innuendo that permeated the campaign, some gay men at the time considered Stevenson the first gay presidential candidate." 1957 Dr. Evelyn Hooker publishes The Adjustment of the Male Overt Homosexual, "in which she administered psychological tests to groups of homosexual and heterosexual people and asked experts, based on those tests alone, to select the homosexuals. The experiment, which other researchers subsequently repeated, demonstrates that homosexuals are no worse adjusted than the general population, and therefore being in their right minds would not, given an option, have chosen homosexuality over the more socially acceptable heterosexuality.
As a result of her studies and the verifications thereof, the American Psychiatric Association removed homosexuality from its handbook of disorders in 1973."1955 Four lesbian couples in San Francisco found the Daughters of Bilitis, the first gay organization exclusively for women. "It was conceived as an explicitly lesbian alternative to other homophile groups of that era such as the Mattachine Society.... was influential throughout the 1950s and 1960s but was torn apart by factionalism in the 1970s. Its members split over whether to give more support to the gay rights movement or to feminism. 1960s "Washington, D.C. became the center for a new militancy in the gay movement by the early 1960s. It was there, as gay men and lesbians began to organize and challenge the federal government's discriminatory policies, that they developed much of the rhetoric and tactics of the gay rights movement....
Perhaps the two most important tactics the Mattachine Society of Washington initiated were the use of public demonstrations and court suits. Public demonstrations like the 1965 picket in front of the White House were an effective way of garnering publicity for their cause. And legal challenges ultimately proved the most effective means of dismantling the government's anti-gay policies. Courageous men like Bruce Scott and Clifford Norton challenged their dismissals and won-suggesting that the courts were the best means of protecting the civil liberties of gay men and lesbians."1962 As the U.S. war in Vietnam picks up momentum, some young men pretend to be gay in order to stay out of the armed forces. Rock legend Jimi Hendrix, although considered to have a "legendary appetite for women ... complained that he was in love with one of his squad mates" in order to obtain a discharge. 1968 "The British scholar Mary McIntosh investigates 'The Homosexual Role', coming to the conclusion that homosexuality is not a definite biological or psychological condition of certain individuals, which distinguishes them from everyone else, but rather a label attached to them by others and/or by themselves. It is a socially constructed role which is played voluntarily or involuntarily by some men and women, but not by others whose actual sexual behavior may not be much different. Ideas such as this eventually lead to a dispute between 'essentialists' (mostly natural scientists), who continue to believe in some essential homosexuality, and 'constructionists'" (mostly social scientists), who no longer share this belief."
DATES EVENTS: 1969 - 1989 1969 June 28: "[E]ight police officers, at approximately 1:20 a.m., on what was now very early Saturday morning, June 28, 1969, swooped in and raided the [Stonewall Inn], arresting some employees and 'inappropriately' attired patrons. As those arrested were being carted off to the police wagon, however, the crowd, which in the past had scattered upon the arrival of the police, stayed, observed and then violently reacted. What exactly triggered the ensuing riot is unknown. The different theories include the resistance of an arrested transsexual, the arrest of a lesbian dressed as a male or the beating of gays by the police. The truth is probably a combination of the three, the individual events inflaming different members of the crowd, which, for whatever unknown reason, decided that this evening was the one in which a stand would be taken. Bricks, bottles and all objects capable of being used as weapons were hurled at the police, forcing them to retreat back into the inn. Lighter fluid was thrown through the broken windows of the bar, followed by matches in an attempt to ignite the flammable liquid.
Shortly before 3 a.m., the Tactical Patrol Force (TPF), New York’s highly trained and armed riot unit, arrived to find the crowd unwilling to bend to its force. As they charged through the crowd, people merely doubled back behind the troopers.
Approximately 30 minutes after the arrival of the TPF, calm prevailed... Thirteen people were arrested. Many others in the crowd, while not arrested, suffered injuries, as did four police officers. The inside of the Stonewall Inn was destroyed.
Word of the resistance quickly spread and the following evening, over a thousand people appeared on the scene, as did the police and the TPF. A second night of rioting ensued, ending at approximately 4 a.m. While Monday and Tuesday were quiet, perhaps due to the inclement weather, the masses returned yet again on Wednesday, the final day of the resistance, which resulted in further clashes with the police.
Though resistance to police harassment and abuse occurred prior to the historic events at Stonewall... it is this series of evenings in 1969, in which people fought back, that crystallized a tangible, organized movement. From those days on, a once splintered group coalesced." 1970 June 28: "In commemoration of the Stonewall Riots, the GLF organizes a march from Greenwich Village to Central Park. Between 5,000 and 10,000 men and women attend the march. Many gay pride celebrations choose the month of June to hold their parades and events to celebrate 'The Hairpin Drop Heard Round the World.'" 1973 December: The Board of Trustees of the American Psychiatric Association declares "by itself, homosexuality does not meet the criteria for being a psychiatric disorder." 1975 July 3: The U.S. Civil Service Commission decides to consider employment applications by lesbians and gay men on a case-by-case basis. 1977 Feb, 7: The U.S. State Department announces that it will begin considering job applications from lesbian and gay men for employment in the foreign service.
Nov. 8: Harvey Milk: "Despite a national climate of hostility against gay people, he ran for office several times. He emerged as a figurehead for San Francisco's large gay community, and was called the 'Mayor of Castro Street.' He was elected city supervisor in 1977, the first openly gay elected official of any large city in the US." 1978 June 25: Artist Gilbert Baker, now known as the gay Betsy Ross, creates the Rainbow Flag. He designs the flag as a positive alternative to the Pink Triangle -- a symbol first used by the Nazis to identify homosexuals.... The original Rainbow Flag had eight stripes: fuchsia; red; orange; yellow; green; turquoise; blue; and purple -- which represent sex, life, healing, sunlight, nature, magic, serenity and spirit.
Nov, 27: Former San Francisco Supervisor Dan White shoots and kills Mayor George Moscone and openly gay supervisor Harvey Milk. "White had resigned previously following the enactment of a gay rights bill which he had opposed.... Harvey Milk is widely regarded as a martyr for the gay community and the gay rights movement. Many Queer community institutions are named for Milk, including the Harvey Milk Institute and the Harvey Milk Lesbian, Gay, Bisexual and Transgender Democratic Club in San Francisco, as well as a number of Queer-positive alternative schools in the United States, including Harvey Milk School in New York City."1979 May 21: Dan White is convicted of voluntary manslaughter on the grounds of diminished responsibility and sentenced to seven years and eight months, a sentence widely denounced as lenient and motivated by homophobia. (White later committed suicide while on parole.) After the sentence, the gay community erupted into the White Night Riots; more than 160 people ended up in the hospital. 1981 June 5: AIDS (acquired immunodeficiency syndrome) is first reported in the United States when the Center for Disease Control reports that in the period October 1980-May 1981, 5 young men, all active homosexuals, were treated for biopsy-confirmed Pneumocystis carinii pneumonia at 3 different hospitals in Los Angeles, California. 1986 June 30: The U.S. Supreme Court, in Bowers v. Hardwick, 478 U.S. 186, upholds "the constitutionality of a Georgia sodomy law that criminalized oral and anal sex in private between consenting adults." 1987 Mar. 14: The AIDS Coalition to Unleash Power (ACT-UP) is formed in March of 1987 at the Lesbian and Gay Community Services Center in New York. Three weeks later they hold their first protest on Wall Street. "ACT-UP uses non-violent direct action and often civil disobedience to bring attention to the AIDS crisis. ACT-UP also sought to stem the spread of HIV by engaging in frank public discussions about AIDS, sexuality and sexual practices.
They are well known for their provocative demonstrations and their famous slogan/logo 'Silence = Death' with an inverted pink triangle, which is reminiscent of the pink triangle assigned to accused homosexual men in Nazi prison and death camps. ACT-UP was formed in New York City by Larry Kramer and about 300 other activists."
DATES EVENTS: 1990 - Present 1991 June: In "A Difference in Hypothalamic Structure Between Heterosexual and Homosexual Men," published in Science (vol. 253, pp. 1034-1037), Simon LeVay reports that the hypothalamus in the brains of gay men was a different size than in straight men.
December: J.M. Bailey and R.C. Pillard publish a study of twins, concluding that "of the relatives whose sexual orientation could be rated, 52% of monozygotic cotwins, 22% of dizygotic cotwins, and 11% of adoptive brothers were homosexual." This study leads some to contend that homosexuality is determined by both pre- and post-birth factors, since if it were entirely inherited the concordance in monozygotic twins would be 100%. 1993 July: Dean H. Hamer, PhD, et al. publish their findings of "a correlation between homosexual orientation and the inheritance of polymorphic markers on the X chromosome," pointing toward at least some inheritance of sexual orientation.
December: The Clinton administration institutes its "Don't ask, don't tell" (DoD Directive 1304.26) policy for gays and lesbians in the military. "The policy requires that as long as gay or bisexual men and women in the military hide anything that could disclose sexual orientation, commanders won't try to investigate their sexuality."1996 Sep. 21: President Bill Clinton signs the Defense of Marriage Act, which bars same-sex partners from receiving federal spousal benefits. The Defense of Marriage Act (DOMA):
- allows each state (or similar political division in the United States) to recognize or deny any marriage-like relationship between persons of the same sex which has been recognized in another state.
- explicitly recognizes for purposes of federal law that marriage is "a legal union of one man and one woman as husband and wife" and by stating that spouse "refers only to a person of the opposite sex who is a husband or a wife."
Sep. 20: President Bill Clinton signs an executive order banning anti-gay discrimination against any federal civilian employee. 1999 Dean H. Hamer, PhD, of the U.S. Laboratory of Biochemistry, National Cancer Institute, National Institutes of Health, publishes an article in Science which proposes that "sexual orientation is a complex trait that is probably shaped by many different factors, including multiple genes, biological, environmental, and sociocultural influences."
Bruce Bagemihl, PhD, publishes his book Biological Exuberance: Animal Homosexuality and Natural Diversity (New York: St. Martin's Press, 1999), which chronicles homosexual and/or transgender activity in over 450 species of animals.
Anthony Bogaert, PhD, published an article in Archives of Sexual Behavior (Vol. 28, No. 3, pp. 213-221) which stated: "The relation between sexual orientation and penile dimensions in a large sample of men was studied...On all five measures, homosexual men reported larger penises than did heterosexual men...Alterations of typical levels of prenatal hormones in homosexual men may account for these findings."
2002 Sep. 16: Toshihiro Kitamoto et al. publish their findings that they have found a way to "switch" homosexual behavior on and off in male fruit flies. The researchers were able to do this by temporarily disrupting synaptic transmissions in the flies. 2003 June 26: "U.S. Supreme Court [in a 6-3 decision, Lawrence v. Texas, 539 US 558] strikes down the Texas state sodomy law banning private consensual sex between adults of the same sex. The court found that the law and others like it violated the due process clause of the 14th Amendment. But legal analysts said the ruling enshrines for the first time a broad constitutional right to sexual privacy."
June 30: Dr. Rina Agrawal et al. present their findings at the annual conference of the European Society of Human Reproduction and Embryology that "Lesbians are more than twice as likely to suffer from a hormone-related condition [Polycystic ovary syndrome], fueling theories that hormones play a role in developing their sexuality."
October: Robert L. Spitzer, MD, et al. publish an article in the journal Archives of Sexual Behavior (Vol. 32, Issue 5, pp. 403-417) which concludes that "there is evidence that change in sexual orientation following some form of reparative therapy does occur in some gay men and lesbians."
October: Qazi Rahman, PhD, et. al. publishes a study which examined the eye blink startle responses to acoustic stimuli of 59 healthy heterosexual and homosexual men and women. It concluded that "homosexual women showed significantly masculinized PPI [eye blink] compared with heterosexual women, whereas no difference was observed in PPI between homosexual and heterosexual men." Dr. Rahman stated in an interview that "because the startle response is known to be involuntary rather than learned, this strongly indicates that sexual orientation is largely determined before birth."2004 Feb. 12: "In an open challenge to California law, city authorities performed at least 15 same-sex weddings Thursday [2/12/04] and issued about a dozen more marriage licenses to gay and lesbian couples. By midafternoon, jubilant gay couples were lining up under City Hall's ornate gold dome and exchanging vows in two-minute ceremonies that followed one after another.
Mar. 15: "A bloc of more than 50 Islamic states, backed by the Vatican, sought today to halt U.N. efforts to extend spousal benefits to partners of some gay employees. The initiative came less than two months after U.N. Secretary General Kofi Annan moved to award benefits to partners of gay employees who come from countries where such benefits are provided, such as Belgium and the Netherlands."
May 17: "Gay couples began exchanging vows here Monday [5/17/04], marking the first time a state has granted gays and lesbians the right to marry and making the United States one of four countries where homosexuals can legally wed."
2005 June 3: A study published by Ebu Demir and Barry J. Dickson of the Institute of Molecular Biotechnology of the Austrian Academy of Sciences shows that a single gene in the fruit fly is sufficient to determine all aspects of the flies' sexual orientation and behavior.
June 30: The Spanish Parliament gives final approval to a bill legalizing same-sex marriage.
July 20: Canada signs gay marriage legislation into law, becoming the fourth nation to grant full legal rights to same-sex couples.2006 June: Anthony Bogaert, PhD, in a 2006 article for the journal Proceedings of the National Academy of Sciences (Vol. 103, No. 28, pp. 10771-10774), conducted a study of 905 men and their siblings and found that the only significant factor for homosexuality in males was the number of times a mother had previously given birth to boys. Each older male sibling increased the chances of homosexuality by 33%. 2011 Jan. 1: The repeal of a 60-year-old California law (Welfare and Institutions Code Section 8050) requiring state health officials to seek a "cure" for homosexuality goes into effect.
The Berliner Tageblatt
March 4, 1933
The Berliner Tageblatt [Berlin Daily] Lists the Gay and Lesbian Bars Closed by Berlin's Chief of Police (March 4, 1933)
The Nazi regime regarded homosexual men as enemies of the people, since their "unnatural sex acts" and "refusal to procreate" supposedly endangered the nation's survival. The allegedly infectious "social epidemic" of homosexuality was to be completely eradicated. As an initial step toward this goal, the Nazis forced the widespread closure of gay and lesbian institutions. This news article from the March 4, 1933, edition of the Berliner Tageblatt [Berlin Daily] lists the names and addresses of various Berlin night clubs closed by decree of the city's chief of police.
Berlin Transvestite Bar Closed
1933 Election Campaign: Hitler’s Election Posters Cover the Front of "Eldorado," a Berlin Transvestite Bar Closed by the Police (Early March 1933)
In the 1920s, Berlin had become famous for its liberal, bohemian atmosphere and its sexual permissiveness – just two of the many reasons why so many artists had been drawn to the city in those days. But “public morality” changed very quickly under Hitler. In March 1933, Berlin’s legendary transvestite bar “Eldorado” was closed by decree of the city’s chief of police. In the photograph below, the windows of the famous Kalckreuthstraße bar have been covered over by swastikas and NSDAP election posters: “Vote for Hitler – List 1.” Shortly thereafter, many other bars known as meeting places for gay men and lesbians were closed in response to “moral complaints.” In 1935, Article 175 of the Reich Criminal Code (which criminalized homosexuality) was tightened, and homosexual acts became subject to more severe forms of punishment. Many of the 50,000 homosexuals sentenced under Article 175 wound up in prison or concentration camps.
Black and Queer in Nazi Germany
February 2010 Author: Rev. Irene Monroe
Missing from the annals of African American history are the documented stories and struggles of African Americans, both straight and "queer," in Nazi-era Germany. Valaida Snow, captured in Nazi-occupied Copenhagen and interned in a concentration camp for nearly two years, is one such forgotten figure, overlooked every Black History Month when we celebrate our heroes and survivors.
Born in Chattanooga, Tennessee, Valaida Snow came from a family of musicians and was famous for playing the trumpet. Named “Little Louis” after Louis Armstrong (who called her the world’s second best jazz trumpet player — besides himself, of course) Snow played concerts throughout the U.S., Europe, and China. On a return trip to Denmark after headlining at the Apollo Theater in Harlem, Snow, the conductor of an all-women’s band, was arrested for allegedly possessing drugs and sent to an Axis internment camp for alien nationals in Wester-Faengle.
While in pre-Hitler Germany all-female orchestras were de rigueur in many avant-garde entertainment clubs, these homosocial all-women's bands created tremendous outrage during Hitler's regime. Snow was sent to a concentration camp not only because she was black and in the wrong place at the wrong time, but also because of her "friendships" with German women musicians, implying lesbianism.
Although laws against lesbianism had not been codified, and lesbians were not criminalized for their sexual orientation as gay men were, these women were nonetheless viewed as a threat to the Nazi state and were fair game during SS raids on lesbian bars, sentenced by the Gestapo, sent to concentration camps, and branded with a black triangle. As a matter of fact, any German woman, lesbian, prostitute, or heterosexual, not upholding her primary gender role — "to be a mother of as many Aryan babies as possible" — was deemed anti-social and hostile to the German state.
Because Nazis could not discern between the sexual affection and social friendship between straight and lesbian women, over time they dismissed lesbianism as a state and social problem, as long as both straight and lesbian women carried out the state’s mandate to procreate.
Nazi Germany's extermination plan for gay men is a classic example of how politics informed its science. Under Paragraph 175 of the German Criminal Code, the persecution of non-German gay men differed from that of German gay men because of a quasi-scientific and racist ideology of racial purity. "The policies of persecution carried out toward non-German homosexuals in the occupied territories differed significantly from those directed against German gays," wrote Richard Plant in The Pink Triangle: The Nazi War Against Homosexuals. "The Aryan race was to be freed of contagion; the demise of degenerate subject peoples was to be hastened."
Hans J. Massaquoi, former Ebony Magazine editor, and the son of an African diplomat and white German mother, in his memoir Destined to Witness: Growing Up Black in Nazi Germany, depicts a life of privilege until his father returned to his native Liberia. Like all non-Aryans, Massaquoi faced constant dehumanization and the threat of death by Gestapo executioners. "Racists in Nazi Germany did their dirty work openly and brazenly with the full protection, cooperation, and encouragement of the government, which had declared the pollution of Aryan blood with 'inferior' non-Aryan blood the nation's cardinal sin," he wrote. Consequently, the Gestapo rounded up, forcibly sterilized and subjected many non-Aryans to medical experiments, while others simply disappeared without explanation.
There was no systematic program for the elimination of people of African descent in Nazi Germany from 1933 to 1945 because their numbers were few, but the abuses they suffered in German-occupied territories, like the one in which Snow was captured, were great and far-reaching.
After eighteen months of imprisonment, Snow was one of the more fortunate blacks to make it out of Nazi Germany, released as an exchange prisoner. She was, however, both psychologically and physically scarred from the ordeal and never fully recovered. Snow attempted to return to performing but her spark, tragically, was gone.
Gays & Lesbians Risked Their Lives
It is a little-known, and not always acknowledged, fact that there were gays and lesbians who risked their lives during the Nazi occupation. Some names are mentioned in the literature, many from artists' circles.
Student Han Stijkel was the religion-inspired leader of one of the first underground groups, the ‘Stijkel group’ – he did not call himself gay.
Dancer and poet Karel Pekelharing was involved in the attack on the prison at the Amsterdam Weteringschans.
Tailor Sjoerd Bakker, together with painter and writer Willem Arondéus, took part in the attack on the Amsterdam population registry.
Willem Arondéus was candid about his homosexuality and wanted to prove that gays and lesbians could be as brave as anybody else.
All four of those mentioned above were executed.
The lesbian cellist and conductor Frieda Belinfante was a leading figure in the artists resistance. Dressed up as a man she participated in the resistance and escaped the Gestapo.
The couple Ru Paré and Do Versteegh saved over fifty Jewish children.
The famous socialist and homo-erotic writer Jef Last was part of the resistance group 'De Vonk' (The Spark). He was in contact with the editors of the magazine 'Levensrecht' (The Right to Live) (1940), which after the war formed the basis for the COC (Dutch Gay Organisation).
Nico Engelschman and Jaap van Leeuwen were active in several resistance groups. Jaap Diekmann went underground, was caught and worked as a forced labourer in Germany.
Other COC-members included
Gé Winter, who saved his Jewish friend Van Spiegel and his family, and the publisher and interpreter Henri Methorst from The Hague, who kept psychiatrist Coen van Emde Boas and his wife out of nazi hands.
In Groningen the tobacco manufacturer Willy Niemeijer was involved in underground activities and later perished in Neuengamme concentration camp.
In Amsterdam the Castrum Peregrini was active, a safe house for Jewish refugees around the German intellectual Wolfgang Frommel, a follower of the poet Stefan George. Homo-eroticism was a concealed theme in their circle.
The poets Percy Gothein and Vincent Weyand were arrested on grounds of homosexuality and died in a concentration camp.
The writer Wolfgang Cordan was the leader of an armed resistance group.
The exhibition 'Who can I still trust? - Being Gay in Nazi Germany and occupied Holland' shows pictures of Frans Toethuis and his Jewish friend, who was arrested and killed and whose name is unknown.
Unlike Jews and Gypsies, gays and lesbians were not threatened with eradication. They were not that easy to find. However, the occupiers, following the German model, did strive to completely suppress this 'unworthy and anti-reproductive' behaviour.
As early as August 1940 the German anti-gay laws were introduced in Holland. Sexual acts between all men, not only between adults and minors, became an offence. For the latter the punishment was a maximum of ten years in jail, for the former a maximum of four years. Minors could also be punished. The regulation (81/40) 'forgot' to include homosexual acts between women. A central registration was started, using 'pink' lists, which had to be provided by local investigation departments. As early as 1920 brigadier Jasper van Opijnen had been appointed to the Amsterdam vice squad to monitor the activities of gays and lesbians. The German approach could build directly on this. When he left the police service in 1946, Van Opijnen was called 'homoführer' in a song by his colleagues.
NIOD researcher Anna Tijsseling, at the reopening of the exhibition 'Who can I still trust?' (Public Library Amsterdam, May 5, 2010), gave an example of a gay case at The Hague police department. In June 1943, in the Zuiderpark, a ten-year-old asks chief inspector Lesage for help. He says that he and his friends have been sexually intimidated by an older youth. That boy had ordered them, 'on behalf of the German Wehrmacht', to drop their pants. Lesage takes the 16-year-old, who confirms the younger boy's story, to the police station. The inspector brings the case to his colleague in Public Morality, Auke Anema. Anema is convinced that the youth must have learned such behavior from adult homosexuals (the 'Dracula' theory). The 16-year-old agrees, of course, and gives the name of a 27-year-old tailor. The man is found and confesses to his influence. After a talk with the boy's parents the police close the case. Nobody is punished. In Nazi Germany these boys would have been disciplined, and sometimes sent to a camp.
Jewish gay men and women
There was no systematic persecution as in Germany. For Jews, however, every offence could be life-threatening. In the register of the Bureau Joodsche Zaken (Office for Jewish Affairs) a woman is mentioned, Mina Sluyter, who was arrested 'because of homosexuality'. From a letter by the Amsterdam vice squad and its reports to the Office for Jewish Affairs the names of several Jewish gay men and a lesbian woman are known. They were arrested by Van Opijnen and his colleagues during the period 1941-1943 and immediately handed over to the Sicherheitsdienst. All but one were murdered in Auschwitz or Sobibor. There has not been any research on their position in camp Westerbork. Pink and other triangles did not exist there, but the yellow 'Jew-star' did. Jews arrested for homosexuality were supposed to wear a combined pink and yellow Star of David.
Apart from the names in the Amsterdam vice squad letter, some Jewish gay men are also mentioned in a letter by esquire Schorer (see the article by Marina van der Klein, www.vertrouwen.nu/reactie_MariavdKl.htm). They are Engers, Hiegentlich, Petermeijer and Sjouwerman. A lot is known about Jakob Hiegentlich, not much (yet) about the others. Hiegentlich was a Catholic-Jewish author who foresaw the coming persecution and took his own life. Another gay Jewish man, Hugo van Win, went into hiding as a forced labourer in Germany under the name Bertus de Witte. He witnessed the gay scene in heavily bombed Berlin, which was never completely eradicated. The lawyer and poet L. Ali Cohen from Haarlem survived the war.
Between 1940 and 1945 ninety non-Jewish men were sentenced for homosexual activities, sometimes because of sexual contact with German soldiers or officials. They ended up in regular Dutch prisons. Historian Koenders was able to identify three cases of gay men who were deported to Germany. There they did not wear a pink triangle, but they may have worn the H-emblem ('Holländer').
The Netherlands Indies
An even lesser-known chapter is the position of gays in the Netherlands Indies in the thirties and forties of the 20th century. A black period was the raid on gays in the colonial elite between November 1938 and January 1939. The raid was instigated by the newspaper Javabode, whose chief editor was a sympathizer of the NSB (Dutch National Socialist Movement), and by the Christelijke Staatspartij (Christian State Party). The Resident of Batavia and Head of Police, Fievez de Mailines van Ginkel, was among the victims. The most famous of the 223 people arrested was the renowned German artist Walter Spies, who lived on the island of Bali and who died in 1942 as a prisoner of war.
Willem Johan Cornelis Arondéus
Willem Arondéus (Source: www.ushmm.org)
”Willem Arondéus was born in Naarden (22 August 1894). He grew up in Amsterdam where his parents had a costume rental business for actors. At the age of thirteen he was admitted to the Quellinus School, which later became the Rietveld (Art) Academy in Amsterdam, where he devoted himself to decorative painting. After completing his education he lived in various places in and outside of the province of Noord-Holland. During the time he lived in the 'Gooi' area he met other artists and befriended the poet Adriaan Roland Holst.
After a short stay in Paris he moved to the island of Urk in 1920 and later to Breukelerveen. He illustrated poems, received commissions for posters and calendars and designed Christmas stamps or charity stamps which were published by the Dutch postal services in 1923. In the same year he received a commission to make a wall painting for the town hall of Rotterdam, his breakthrough as a visual artist. In general this work is seen as influenced by the visual artist Richard Roland Holst, a man he admired and who inspired him. He made the engraving shown below for the poem 'The Dying' by his other supporter and inspiration Adriaan Roland Holst, Richard's brother.
The Dying. Drawing rhyme print (19x19 cm, pen/gold paint)
with thanks to J.Versteegh, coll. Pygmalion-art.com Tapestry of Arondéus
Picture: M. Eijkhoudt
Between 1930 and 1932 he made nine tapestries with decorations around the coats of arms of the cities of Noord-Holland for the county hall. The following year he received a commission for a wall painting for the health clinic of the City Health Department in Amsterdam. On three walls he portrayed hunting, fishing, shipping and agriculture.
Life as a visual artist did not come easy, however. He stuck with his particular style of painting, which was already considered outdated in his time, and his artistic work barely paid for his living expenses.
Around 1935 he turned away from the visual arts and devoted himself to writing. His debut in 1938 was the novel The Owl House, for which he received an award from publisher Kosmos. His next novel, In the Flowering Winter Radish, was also well received, although critics took issue with his style of writing. In 1939 his first art history book was published, a biography of the painter Matthijs Maris. In general this is considered his best work. In her review of the post-war reprint of the book Annie Romein-Verschoor puts Arondéus in line with great stylists like Abraham Kuyper and Johan Huizinga" (www.inghist.nl).
Arondéus, gay and resistance fighter
“Arondéus was a remarkable and obstinate man from Noord-Holland who, as early as 1914 at the age of twenty, contrary to accepted custom openly talked about his homosexuality. In those days, even in the circles he frequented, this was a bit too much for many people" (nl.wikipedia.org). There was a continuous inner struggle in himself as well.
"In the documentary by Rudi van Dantzig, The Life Of Willem Arondéus 1894-1943(Arbeiderspers 2003, 446 p.) Arondéus is quoted as saying: "It's like I'm living in a blackout - without sorrow and without joy." In his often very depressing diary notes Arondéus makes the reader witness to a highly torn and lonely existence. Despite his ambivalent friendships with, for instance, Adriaan and Richard Roland Holst and resistance people like Willem Sandberg and Gerrit van der Veen, Arondéus remained a shadowy figure, even after he took part in the attack on the Amsterdam registry. Arondéus was gay, and considering the morality of the society around 1920, his frankness about this can be seen as his first act of liberation" (www.intermale.nl).
Arondéus with fishermen from Urk (Source: www.gaynews.nl)
"In 1932 Arondéus and green grocer Gerrit Jan Tijssen became friends. They experienced poverty on a regular basis. In 1941 Arondéus sent Tijssen back to Apeldoorn, most likely because he felt it was too dangerous now, with his increasing activities in the resistance" (nl.wikipedia.org). "His love for Jurie, a fisherman from the island of Urk, and later for Gerrit Jan, the young nurseryman from the Veluwe area, were cause for these sometimes bitter self-dissections. 'Do I have love, true love for someone ... or is it all a sham, nothing but temporary emotion?", he asked himself. In 2001, twenty of his homo-erotic poems: Detached Strophes, inspired by the work of the poet Boutens and written in 1922 on the island of Urk, were published posthumously. In his diaries he wrote extensively about his artistic struggles, but also about his worries about 'money and lust'. Also his social disillusionment became more evident, both in his correspondence and in his literary work: 'Yes, this philistine world is rotten, a garbage bin, a loo' (p. 26, 95)" (www intermale.nl).
“When in 1941 Arondéus’ book about monumental painting in Holland was published, he found himself heavily involved in the resistance. Together with Willem Sandberg and Gerrit van der Veen he falsified identity cards and wrote the Brandaris Letters. In these letters he identified cases of cultural collaboration and called for resistance whenever the occupier threatened the arts, as it did with the foundation of the Chamber of Culture (Kulturkammer). In 1942 his Brandaris Letter was merged with the artists' resistance magazine The Free Artist, founded by musician Jan van Gilse." With Gerrit van der Veen he led the attack on the Amsterdam Population Registry in 1943. He was arrested and, together with some friends from the resistance, sentenced to death after a show trial. For a description of the attack on the registry and the role of Arondéus see the story by Martinus Nijhoff and the text on Sjoerd Bakker. The poet Martinus Nijhoff, ex-officer of the engineers, provided instructions on where the explosives should be placed and described the attack in 1945.
Grave of Arondéus at the Honorary Cemetery Bloemendaal (Source: www.ogs.nl)
From his death cell in the prison at the Kleine Gartmanplantsoen in Amsterdam Arondéus wrote his last letter to a friend: "There's only wonder because it's so easy to depart in love from life, so happy to commemorate what you leave behind, without bitterness" (www.inghist.nl). At his execution Arondéus is said to have shouted: "Let it be known that gays are no cowards" (nl.wikipedia.org).
“Shortly after the war the participants in the attack on the registry were decorated, some posthumously, for their resistance work during the war, some of them with the Military Order of William. It was not until 1984 that Arondéus was granted the Verzetsherdenkingskruis (commemorative resistance cross) for his work in the resistance. Generally his homosexuality is assumed to be the reason why this decoration was so long in coming" (nl.wikipedia.org).
Frieda Belinfante
10 May 1904 ~ 26 April 1995
Frieda Belinfante was born on 10 May 1904 in Amsterdam. She was the daughter of the pianist Ari Belinfante and 'just a girl', a half-Jewish girl. She was openly lesbian and at the age of sixteen fell in love with the composer Henriëtte Bosmans. They lived together for seven years, even when Bosmans temporarily had gentleman-friends.
Belinfante herself, a cellist, was married for some years to the flutist Jo Veldkamp, who was also a conductor but did not excel at it. But Frieda was good at conducting. After studying with Hermann Scherchen she won a conductors' contest with the Orchestre de la Suisse Romande in Montreux. In 1937 she performed in the Concertgebouw with the student orchestra J. Pzn. Sweelinck and the women's orchestra Aedon; that was when she was discovered by the outside world. During the late thirties she started Het Klein Orkest (The Small Orchestra) in Amsterdam, a chamber orchestra which had two successful seasons.
Belinfante refused to become a member of the Nazi 'Kulturkammer' and therefore disbanded her orchestra at the beginning of the war. She joined the artists' resistance. If necessary she dressed up as a man. With Willem Sandberg, Gerrit van der Veen and Willem Arondéus she planned the attack on the Amsterdam Population Registry (27 March 1943 – see Sjoerd Bakker).
In a film from the exhibition 'Who can I still trust?' by Klaus Müller, Frieda recalls that Sandberg asked her one day for money. Because she knew quite a few wealthy people, she turned to Heineken (owner of the well-known beer brewery). He told her that he could not help, because the flow of money was completely controlled by the Germans.
Frieda then offered her valuable cello to Heineken; she could not use it anyway at the time. Heineken thought this was a marvellous plan, and so they concluded the sale and circumvented the German controls.
After the attack Frieda had an adventurous escape to Montreux in Switzerland, where she found herself among 160 other Dutch Jews. There, especially, she felt like an outcast and the subject of gossip.
Back in Holland where the reception, as in many cases, was 'quite cool’, she decided to emigrate to the United States in 1947. She worked in Hollywood in one of the big studio-orchestras and with a group of Hollywood musicians she formed a professional symphony orchestra in Orange County.
She was the first woman in the world to become the permanent conductor of a professional orchestra. But the Los Angeles Philharmonic Orchestra did not tolerate any competition. And to make matters worse, her personal lifestyle was held against her. In later life she gave music lessons to hundreds of children.
Frieda died on 26 April 1995, in Santa Fe, New Mexico.
In 1998 and 2004 Dutch national television showed the film But I was only a girl by director Toni Boumans. In this film Frieda tells the story of her life. Her elder sister Renee, ex-students and friends supplement her story. The few recordings that exist of Frieda as a cellist, and as conductor of her Orange County Philharmonic Orchestra, can be heard in the film.
Tiemon Hofman
Is the only gay person who has been officially recognised until now (2010) as a war victim. He lived in Groningen and was 16 when he was arrested as a result of the occupiers' tightened anti-gay regulations (see introduction). Dutch judges sentenced him to a reformatory school.
After his release he never managed to get a permanent job. Only at the end of his life did he understand that this was because his war sentence was included in his post-war criminal record. Tiemon became a marine in the Netherlands Indies and there, too, had gay adventures. He took part in the gay subculture of Groningen in the fifties and sixties. Under the name Paul Monty he wrote gay pulp novels, and published two editions of the gay magazine De Nichten (The Sissies).
Willem Arondéus Lectures
In December 2004 the County Government of Noord-Holland decided to organize an annual theme-based lecture, followed by discussion, in honour of the artist and resistance fighter Willem Johan Cornelis Arondéus. With the lecture and discussion, citizens and politicians are offered a stage to freely exchange ideas about current social themes that are relevant to the province.
Rudi van Dantzig was the first to give a lecture on 25 April 2005 in the County hall in Haarlem, on the topic: 'Can you be who you want to be or has this freedom become awkward?'
In 2006 bishop Philippe Bär was the lecturer on the theme 'Freedom'.
The third Arondéus Lecture was held on April 24, 2007 in the stately conference room of the Teylers Museum in Haarlem. It was presented by Gerard Spong, a prominent Surinamese-Dutch gay lawyer.
On 22 April 2008, in the same Teylers Museum, writer Désanne van Brederode had the honour of giving the fourth Willem Arondéus lecture. Her lecture centred on the question of what a modern public morality would look like.
Philosopher Ad Verbrugge discussed the question 'What is freedom?' in the 5th lecture in 2009. The lecture was held in the splendidly renovated County Hall.
Picture: Pim Ligtvoet
Picture: Serge Ligtenberg
Marjolijn Februari talks to the audience
Picture: Pim Ligtvoet
On 27 April 2010 philosopher, publicist and lesbian Marjolijn Februari presented the 6th Willem Arondéus lecture. It was remarkable, she said, that the last wish of the resistance fighter sentenced to death was a cream cake. The Roland Holst family from Laren fulfilled this wish and twelve pieces were distributed among fellow prisoners. 'Homosexuals are more frivolous than ordinary people', was Ms. Februari's approving comment. But she also made a link to another biographical theme.
Arondéus was an artist who, just like many others in his time, propagated a public ideal. Why do we have so little faith in the goodness of humankind, in the admirable ability to co-operate and to live together? And why is there so much distrust of citizens on the part of the authorities, of bureaucracy?
The ability of parents, teachers and carers to think for themselves, to make the right decisions by themselves, yes, and even to call into existence an up-to-date government: Ms. Februari thinks this ability is huge. The citizens founded the state, schools and hospitals, so they can reform them too. If the government can radiate this confidence, its citizens will change too. She gave a cautionary example: an English tourist, stranded in Portugal because of the eruption of the Icelandic volcano, shouting at the TV camera: 'Thank you, Mr. Brown!' Politicians should reply: 'Just save yourself'.
The name of tailor Sjoerd Bakker
(Leeuwarden, 10 June 1915)
Is tenth on the plaque for the twelve men who were executed on the first of July 1943 following the attack on the Population Registry in Amsterdam (see below). After the war all were buried at the Honorary Cemetery in Bloemendaal.
This splendid picture and the following personal details were taken from the website of this cemetery. Sjoerd Bakker was a tailor, cutter and designer. He worked where he lived: at the Vondelstraat 24 in Amsterdam. From 1942, when forced labour, raids and deportations started, he helped Jewish and other people in hiding.
Bakker provided forged or stolen stamps for food and identity cards. In this way people in hiding could obtain food and move around more or less safely. He also helped Jewish people in hiding to store their belongings illegally. Initially he worked on his own. Later he came into contact with the Persoonsbewijzencentrale (Identity Card Registry) and Gerrit-Jan van der Veen, through Willem Arondeus, who was a friend of Sjoerd. In February and March 1943 Bakker made the police uniform coats which were necessary for the planned attack on the Amsterdam Population Registry: two for the officers, 'captain of the State Police' Arondeus and 'lieutenant' Van der Veen, and four for the 'constables' Rudolf Bloemgarten, Karl Gröger, Coos Hartogh and Sam van Musschenbroek. He received the necessary materials from interior designer Einar Berkovich - an acquaintance of Van der Veen - through contacts at the Hollandia off-the-peg clothing factory in Kattenburg.
Willem Arondéus, the first name on the plaque, was in charge of the attack, together with sculptor Gerrit van der Veen. Both were active in the Identity Card Registry. At that time the Population Registry was housed in the former concert hall of Artis (Amsterdam Zoo) at the Plantage Kerklaan. By destroying the files the artists' resistance wanted to make it impossible to check the false identity cards of Jews and other people who went underground.
Two of the executed, medical student Rudolf Bloemgarten (nr. 2) and the store clerk Halberstadt (nr. 7), were Jewish themselves and helped people in hiding. The lesbian conductor Frieda Belinfante was a Jewish member of the group as well. The architect Koen Limperg (nr. 9) made floor plans of the building. The Catholic historian and Hispanist dr. Johan Brouwer (nr. 8) provided Arondéus, who was impersonating a police captain, with a gun. Policeman Cornelis Roos (nr. 12) may, just like Sjoerd Bakker, have helped to get hold of the necessary police uniforms. Poet Martinus Nijhoff, ex-officer of the engineers, pointed out where the explosives should be placed. He escaped, as did museum curator Willem Sandberg and Frieda Belinfante. Gerrit van der Veen also escaped, but he was arrested a year later and shot.
Plaque at the Plantage Kerklaan (Source: www.jhm.nl)
On March 27, 1943 six resistance men disguised as policemen and three as civilians forced their way into the registry and tried to set the offices on fire with explosives. The identity cards may have caught fire, but because of sympathetic firemen the water damage was also considerable. Unfortunately it did not benefit the Jews who had already been put on transport. Indiscretion by the participants and betrayal led to a number of arrests. Within three weeks the Sicherheitsdienst was able to round up most of the culprits and their helpers. The SS and Police Court passed the death sentences on the 12 men named on the plaque on 18 June 1943 in the Colonial Institute (now the Royal Institute for the Tropics); the sentences were carried out on 1 July 1943 in the dunes near Overveen. The memorial stone in front of the former concert hall, Plantage Kerklaan 36, was designed by Willem Sandberg. On the grave of Sjoerd Bakker there is the following text: "but the greatest of these is love" (New Testament, 1st letter to the Corinthians, 13)
Grave Sjoerd Bakker (Source: www.ogs.nl)
www.ogs.nl (picture grave)
www.jhm.nl/amsterdam (picture plaque)
The gay identity of Sjoerd Bakker has been described by Pieter Koenders - see the research work plan by drs. Marian van der Klein, Gays in the collective memory of the Second World War: fifty years of conceptualizing homosexual war experiences, Dec. 2004, in Dutch (see www.iisg.nl/research).
Willem August Theodorus Niemeijer
The factory of the Niemeijer family business (Source: www.ovmgeducatief.nl)
(Groningen 8 March 1907),
Was the eldest son of the well-known tobacco manufacturer Theodorus Niemeijer. He was gay and worked for the resistance in Groningen. In the same city Tiemon Hofman was arrested and sentenced because of his homosexuality. Willem Niemeijer died on 16 February in concentration camp Neuengamme near Hamburg. His body was laid to rest at the Dutch Honorary Cemetery in Hamburg (W.A.Th. Niemeyer).
"Between 1941 and 1945 over 5,500 Dutch men and women were transported to the German concentration camp Neuengamme for various reasons. The majority were in the resistance (like the poet Jan Campert), but there were also hostages, people who were arrested in retaliation for actions by the resistance: Jews, Jehova’s Witnesses and black marketers. When the war situation became more precarious for the nazi-regime, circumstances for the prisoners became worse. Hardly any food or water was available and the terror by the camp guards increased. Eventually only about ten percent of the prisoners returned to Holland in 1945."
Grave Niemeijer in Hamburg (Source: www.ogs.nl)
http://www.vriendenkringneuengamme.nl/boek_ned.htm (text between quotation marks about ‘Dutch prisoners in Neuengamme. The experiences of over 5,500 Dutch in a German concentration camp, 1940-1945’. Final editing by dr. Judith Schuyf. Zaltbommel 2005)
www.ogs.nl (picture grave)
www.ovmgeducatief.nl (picture factory)
Henrica Maria Paré and Theodora Versteegh
'Ru' Paré, born in Druten (1896-1972) and 'Do' Versteegh, born in Kerk Avezaath (1889-1970), got to know each other when Ru moved to The Hague in 1919. In The Hague she registered at the Royal Academy for Visual Arts, where she met the painter Jan Toorop. Theodora Versteegh studied singing with Cornélie van Zanten and Tilly Koenen and had already started her career as an alto. Do and Ru had a lesbian relationship.
During the war both refused to join the 'Kulturkammer'. With Ru's resistance group the couple saved over fifty Jewish children and also a number of adults. Ru Paré, nicknamed 'Aunt Zus', coordinated the resistance work, which consisted mainly of finding foster families, maintaining contacts and providing falsified identity cards. With her concerts Theodora Versteegh provided the necessary money.
The resistance group of aunt Zus searched for hiding places throughout the country. In dangerous situations priests and vicars often played an important role in finding new addresses. One of them was the Frisian vicar Sipkema. Aunt Zus also took care of changing Jewish identification cards into normal-looking documents. The visual artist Chris Lebeau removed the stamped 'J' from the cards. He was arrested at the end of the war and died in concentration camp Dachau (2 April 1945).
Ru Paré always kept in contact with the children she saved. A number of them moved to Israel. One of the children was Hanneke Gelderblom-Lankhout, who managed to get a street in The Hague named after Ru Paré.
Do Versteegh (Source: www.dutchdivas.net/frames/alten.html)
Theodora made her debut in 1914 in the oratorio Joshua by Händel. She sang the alto solos in the Matthäus Passion about 250 times. She sang duets with Jo Vincent and together with Jo, Evert Miedema (later Louis van Tulder) and Willem Ravelli in the Jo Vincent Quartet. She also sang in Belgium, France and Germany. During the thirties Do Versteegh began teaching in addition to her solo career. She performed until 1948.
The archives of Theodora and Ru are kept at the Dutch Music Archives. In the town of Pijnacker there is a Theodora Versteegh street and a Ru Paré boulevard, and in The Hague a Ru Paré street. The old Marius Bauer School in Amsterdam-Slotervaart (merged with the nursery school De Grutto) is now the Ru Paré School (1988). The school is situated on the Chris Lebeau Boulevard, named after a member of Ru's resistance group. The painter Hugo Kaagman made two mural paintings on the building. The school is active in its district as a 'Brede School' (community-integrated school) and has an informative website.
Mural painting of Hugo Kaagman on the Ru Paré School (Source: www.galeries.nl/mnexpo.asp?exponr=25143)
www.enter-amsterdam.nl/Public_html%20rupare/2005-2006/index4.htm (Picture Ru)
www.dutchdivas.net/frames/alten.html (Picture Do)
Karel August Pekelharing
6 August 1909~10 June 1944
Karel Pekelharing (Source: www.eerebegraafplaatsbloemendaal.eu)
Dancer and poet Karel Pekelharing (Hoorn, 6 August 1909 - the date of 6 April on his grave is wrong) was a member of the Artists Resistance. The best known person from this group is the sculptor Gerrit van der Veen. Together with Willem Arondéus he led the attack on the so-called Identity Cards Registry (PBC).
Other members of the artists resistance who were also gay were Frieda Belinfante and Sjoerd Bakker. Karel danced with the Nederlandsche Ballet; he was a choreographer as well. Because he was a well-known anti-fascist and communist he went into hiding for a while in Kassel, Germany. Late in 1942 he returned and lived underground in The Hague and Amsterdam.
In the attack on the prison at the Weteringschans, which was conducted on New Year's Eve 1943-1944, the group worked together with the group of Jan Bonekamp and Ko Brasser (Council of Resistance, RVV). In a report of the failed attack Karel Pekelharing is mentioned twice (see below). Just as in the attack on the Amsterdam Population Registry, police uniforms were used. Before a second attempt could be made Karel was arrested in the Amsterdam American Hotel (6 April).
A hundred metres from there, in that very Weteringschans prison, Pekelharing was locked up. On 10 June 1944 he was executed in the dunes of Bloemendaal. His grave is located at the Honorary Cemetery ‘Bloemendaal’ in Overveen.
Grave Karel Pekelharing (Source: www.ogs.nl)
The Attack on the Weteringschans
"An enormous amount of work went into it, but the attacks on the Weteringschans all failed. We didn't succeed and neither did any of the others. The first time we tried it was on New Year's Eve 1943/1944. With a couple of guys we were in a house, if I remember correctly in the Krayenhoffstraat. Gerrit van der Veen was there, the famous sculptor and resistance leader. That night there were more people present from the artists resistance, including sculptor Johan Limpers and Karel Schippers, the artist who was later shot in Delft. Karel Pekelharing was also there, an actor, and Ferry van den Ham.
There were German uniforms for five men. They had all the equipment and insignia to make us look like German policemen. The uniforms were provided by Alie van Berkum, who worked at the shipping department of S. Krom textile cleaning in Alkmaar. They were stolen and brought to a safe address. Alie Hollander and someone else brought these uniforms by train to Amsterdam. This in itself was very dangerous, because if they had been arrested in some kind of check-up, the consequences would have been disastrous. So everything was well organized.
There was a car which drove for the Wehrmacht. Two guys who drove the car promised to cooperate. They came from somewhere near Amersfoort and were on duty on the 31st of December. The plan was that the five of us would pose as German policemen bringing in five prisoners at the Weteringschans. With us was a real German, Albert was the name he used. He was later shot as well. But of course he spoke German and he would do the talking for us. The plan was to drive through the gate to get to the inner courtyard. Of course that gate had to be opened for us by our German-speaking Albert. Then we would unload our so-called prisoners and take aim at the SS who were on duty that night. And then the plan was to free the real prisoners that we came for. I thought about the Zaandam guys, also in the resistance, Ab Huisman, Sjef Zwolfs, who delivered that trotyl from the Hem Bridge, and others.
It just so happened we spent the night in a house whose tenants were celebrating New Year's Eve somewhere else. They left the house to us. Old houses with wooden stairs, I think it was the second floor. During the night, Gerrit van der Veen went to the car. It stood in a little shed which was lent to us; I don't remember by whom, I was not involved in that. But the doors [from the shed] didn't close very well. That car was too big. Early in the morning the two guys from Amersfoort had already found out the battery was empty. A new one had to be provided, otherwise starting the engine would have been impossible. How that came about I don't know. With the help from someone in our group, they managed to get a new battery. So, during the night some work needed to be done in that little shed. Apparently light leaked through the doors and in one way or another drew the attention of the Germans.
Just before the end of curfew we would attack. Several hours earlier Gerrit van der Veen went to the shed. He took a boy with him (after the war we heard his name was Jansma). The word was: put your uniforms on because we'll drive up in a minute. I was a kind of Feldwebel, some officer; we were ready and waiting. And all of a sudden there was a thumping and thundering on those stairs. That sounded twice as loud in the silence of the night. That Jansma comes running up, out of breath and upset: the SD [Sicherheitsdienst] is at the car! Gerrit van der Veen managed to get on top of the shed and saw the krauts taking those guys. The car stayed there. So, away with the uniforms! Away with everything and get out of there. It was all so well prepared. But it's still a big question how the SD got there...
Well, then nothing much happened for a while, but meanwhile we did some small stuff. It wasn't like us being on a holiday though, but I mean, it took some time before we went back to the Weteringschans. Again it would be under the leadership of Gerrit van der Veen. We met in a place in the Sarphatistraat. But because Sarphati was a Jew, that street was renamed as Muiderschans. A number close to 100. It was a very decent house that was provided to us. Jan Bonekamp was with us again. A few guys from Alkmaar, Johan Asjes and Joop Jongh, Meindert van der Horst. And let me mention this also: not long before we got together, Karel Pekelharing was arrested, Paul Guermonprez was arrested."
Johan Aaldrik Stijkel
8 October 1911~June 1943
Han Stijkel (Source: oranjehotel.nationaalarchief.nl)
“One of the first to start a resistance group was Han Stijkel, born in Rotterdam on 8 October 1911. He studied English at the University of Amsterdam. During his years as a student he was already involved in the fight against fascism; from Portugal he took part in actions of the resistance against Franco during the Spanish Civil War (1936-1939)." (Oranjehotel). He was an acquaintance of esquire Schorer, founder of the Dutch Scientific Humanitarian Committee (NWHK), which stood up for the emancipation of gays. (Drs. Pieter Koenders, phone call October 2006).
“Han Stijkel had contacts in leading circles in The Hague. Therefore he managed to involve some influential persons in his group. This group most likely never consisted of more than 80 people, surely far less than the 100 to 150 Stijkel mentioned himself. One of them was general-major S. Hasselman (1880) who took on the military part of the work.
The group also consisted of police officers, students, military officers and merchants” – from Catholic, Jewish, socialist, and other backgrounds. As can be seen from an impressive letter, Han Stijkel was clearly inspired by Christian philosophy.
“There was an active core group in the Zaanstreek”, especially in Koog aan de Zaan. They were people from the socialist youth movement (AJC), including the director of the Honig food company, the couple Ero-Chambon from the dance hall De Waakzaamheid (The Vigilance), the owner of Zwart's Automotives and others. Modelling themselves after the Ordedienst (underground militia) groups they mainly focused on collecting military information.
“Stijkel had been given the order by the government to unite the resistance groups that were spread throughout the country. Therefore he and the members of his group travelled across the country. Stijkel used the alias dr. Eerland de Vries.
During these trips espionage information was collected as well, which was passed on to the Organisation-Westerveld. ... At the start of the war this work was extremely difficult because supporting organisations (such as the Identity Cards Center, which made false identity cards – see Arondéus and Bakker) did not exist yet and hiding places were scarce.
Also real beginners' mistakes were made; ... Certain information about the group, like member lists and weapons lists, was handled very carelessly. As well, the power and the years of experience of German counter-espionage were underestimated. The extremely dangerous spies Van der Waals and Ridderhof (so-called V-Männer) managed to infiltrate the Stijkel group”.
“Han Stijkel wanted to get new instructions for his activities directly from the government in London; he also wanted to bring a lot of espionage material to England. Through a police organisation Stijkel came in contact with the brothers Willem (1896) and Arie (1899) van der Plas, fishermen from Katwijk, who in their fishing boat KW 133 would take Stijkel and his right-hand men Gude (1916) and Baud (1919), both fellow students, to 'a certain location in the North Sea', where they would be picked up by an English or Dutch submarine. A 'rich Jew' would also come along, who promised financial support to the organisation in return for the fare to England. The latter, however, turned out to be a fish merchant from Scheveningen who worked for the Sicherheitspolizei (Sipo). Furthermore there were also traitors within the police organisation that mediated the chartering of the KW 133.
The fishing boat KW 133, which was later renamed UK 65 (Federation of Fishing Unions)
During departure from the harbour of Scheveningen on the second of April 1941 everything went wrong. The exit from the harbour was blocked. Though Stijkel, Gude and Baud managed to jump overboard, they were arrested instantly. After several days of interrogation they were transported to the 'Oranjehotel' (prison in Scheveningen). Soon the arrest of another 15 men followed and this number rose to 47, among them 4 women.” As a result of indiscretions by a former member of the group, the majority of the Koog group was rounded up and detained in Scheveningen as well. “In the Oranjehotel Stijkel remained the leader of the group.”
Imprisonment and execution
”After more than a year the complete Stijkel group was transported to Berlin on 26 March 1942. Here, in September 1942, a trial was held before the Reichskriegsgericht, the highest military court in Germany. This was quite an exception, for nearly all arrested Dutch resistance fighters were sentenced in Holland. The trial was held in secret. The members of the Stijkel group were treated as so-called 'Nacht und Nebel' prisoners. ... The outcome of the trial and their execution remained a secret for a long time.
Grave Han Stijkel (Source: www.ogs.nl)
On 26 September 1942 39 death sentences were pronounced. Six members of the group received clemency and were sent to a correctional facility; one died in prison. Despite huge efforts by the Dutch government in London, which asked neutral Sweden to mediate, and by the consul in Berlin, the sentences of ... 32 remaining members of the Stijkel group were upheld.
Then the convicted lived for eight months between hope and fear, but on the fourth of June 1943 the sentence was carried out on a firing range in Berlin-Tegel. The 32 members of the Stijkel group were shot at 5 minute intervals, Han Stijkel first. Both in prison and during the execution Stijkel and his group received a lot of support from the prison vicar Harald Poelchau, who was very impressed by the attitude of the Dutch.”
In his farewell letter to his father, ‘Pipa’, Han writes about his transition to another, eternal life: "When you get this letter, I've crossed over from this well known but still so mysterious life to the big unknown life ...
Stripped of this material body, which I always felt to be an impediment, I'm where God wants me to be". "Far above this earthly existence, above 'birth' and above 'death' lies the awareness of that 'spark of God' in me." It is from this awareness of eternity that he also rejected feelings of hatred and revenge. "I did what I felt was my duty.
The Germans think that too". He considers the power of the Germans, just like Jesus before the judge's seat of Pilate, as deriving from God. "God always does what is best for us, even if we can't see it yet’.
After the war
“For a long time there was uncertainty about the fate of the Stijkel group. ... it turned out that the 32 executed members of the group were buried in a cemetery in Berlin-Döberitz, a part of the city which at the end of the war was in the Russian section of Berlin. ...
In June 1947, with help from the French Occupation Authorities the bodies could finally be transported to the French section of Berlin and from there to Holland. Attended by a large crowd (the fate of the Stijkel group hit Holland hard) the 32 members of the group were buried at the cemetery of Westduin (Ockenburg) in The Hague.
Memorial Service for the Stijkel group
The funeral was preceded by a service in the Great Church of The Hague. Numerous officials were present, among them a representative of Queen Wilhelmina, who showed great interest in the Stijkel group. Following the service, a kilometre-long procession moved through the streets of The Hague to the cemetery. There 32 simple wooden crosses and a monument that was erected later on remind us of Han Stijkel and his resistance group”.
After the war many city streets were named after resistance fighters who were killed. In several cities in Holland the memory of Han Stijkel and his resistance group lives on. There is, for example, a Han Stijkelstraat in the Northeast Polder. On Highway A6,” near Urk, “a petrol station also carries the name 'Han Stijkel’.” A secondary school in The Hague was named after him as well. It later became a part of the Dalton School Community. The same city also has a Han Stijkel Square.
More information about the other members of the Stijkel group is to be found on the site of the Foundation Honorary Grave Stijkelgroep: www.stijkelgroep.nl.
Han Stijkelstraat in the Northeast Polder
All quoted texts and pictures (except for one) are from http://oranjehotel.nationaalarchief.nl/gevangenen/onderzoeksvoorbeelden/stijkel.asp
The picture of the grave: www.ogs.nl.
Also see www.joodsmonument.nl and www.stijkelgroep.nl
Pieter Koenders wrote about the gay identity of Han Stijkel, see: Work Plan Investigation drs. Marian van der Klein in Gays in the collective memory of the Second World War: fifty years of conceptualizing homosexual war experiences. Dec. 2004 - in Dutch (see www.iisg.nl/research).
The information about the Zaan: J.J. ’t Hoen en J.C. Witte, Zet en Tegenzet (s.y. ca.
Frans Toethuis (©Collectie Jan Carel Warffemius)
On this picture of three gay friends on the beach in Zandvoort, Frans Toethuis is the one in the middle, with a rolled up drawing or poster between his knees.
The man to the left of him is Hein Jorissen, the name of the young man on his right, without jacket, is unknown. The picture is from an estate which was kept hidden for a long time and decorates the book by Klaus Müller about the persecution of gays during the Second World War (see below). Frans had a Jewish friend whose name is unknown and who was arrested and killed in the Holocaust. Frans Toethuis lived from 1910-1989. At the time he worked for the fashion-house 'New England'.
www.vrolijk.nu about: Klaus Müller (ed.) - Beaten to Death, Ignored to Death - Persecution of gays by the nazi regime 1933-1945
Klaus Müller and Judith Schuyf (ed.) It starts by saying no: Biographies about resistance and gays 1940-1945, with portraits of Dutch gays and lesbians who were active as members of the resistance.
In the review of the first book, www.schorer.nl refers to a pdf-file with fragments and pictures.
Early gay movement in the Netherlands
Sugar bag from Atlantic (Source: www.suikerzak.nl)
“On January 14, 1940 in the Amsterdam hotel-restaurant Atlanta [Atlantic] at the Frederiksplein a meeting took place to start a monthly magazine for gays. Initially the name Ons leven (Our Life) was suggested, but eventually the more militant name Levensrecht (Right to Live) was chosen.
The three founders were Jaap van Leeuwen, Niek Engelschman and Han Diekmann.” Engelschman was Jewish. Diekmann and he might have chosen this hotel deliberately because Atlantic had a large Jewish clientele and was not too far from Diekmann's house. During the war the hotel's owner, Jacob Sweering, would help many Jews, adults as well as children, go into hiding.
The 'Preface' of the magazine formulates an important principle: "Averse to any religious or political stand, LEVENSRECHT takes the common humanitarian principle as a guideline." It therefore demands ‘Lebensraum’ for everyone. In the second edition the editors, in response to a reader's letter, phrase a similar principle: "And therefore it is so beautiful, that in principle there is the possibility of love between all humans, irrespective of race or nationality, age or class, religion or gender." The magazine's orientation was against every racial doctrine and every theory of superiority.
Clipping from Levensrecht (Source: www.ihlia.nl)
“The fourth [actually the third] issue of Levensrecht was printed but not yet distributed, when the Germans invaded the Netherlands on 10 May 1940. The writer Jef Last warned the editors that the Germans might make use of the address file. Therefore they destroyed what had to do with Levensrecht and Van Leeuwen, with his fabulous memory, learned the names and addresses of the 190 subscribers by heart.”
“Immediately after the war the anti-gay regulation 81/40 was lifted. Thanks to the fact that Levensrecht had disbanded during the war and did not collaborate with the Germans, the required permit could be obtained, and on 4 September 1946 the first post-war issue, which closely resembled the first three, was published. The continuity in form and content is remarkable.
“On Saturday 7 December 1946 the first meeting of the readers of Levensrecht was held in De la Paix in Amsterdam, with a lecture by Last about love in Greece. There was a huge audience. On Sunday 8 December, a program with songs, dance and recitation followed in Hotel Krasnapolsky on Dam Square in Amsterdam. The place was rented under the pseudonym of 'Shakespeare Club'. This alias had been chosen for good reason, because when Krasnapolsky figured out the nature of this club, the management made it clear they were not welcome there anymore. So there was also a pre-war continuity in being condemned by society. Many members of the new club used pseudonyms, the chairman Engelschman (Bob Angelo) included. The pseudonym 'Shakespeare Club' was to be replaced by the equally vague Culture and Relaxation Centre (COC).
“Ever since its establishment in 1946, the COC, in its pursuit of equal rights, put great effort into drawing the public's attention to the persecution of gays, before and after the war. Both the first COC chairman, Niek Engelschman (from 1946 until 1962), and the second, Benno Premsela (from 1962 until 1971), who had been in hiding during the war, regularly drew attention to the subject.”
Han Diekmann (1896-1989) is the least known of the three founders of Levensrecht, but he was in fact the publisher. He was the treasurer of Levensrecht and the only member of the editorial board to use his own name. Diekmann, in that period the partner of the much younger Engelschman, could do this because of his financial independence.
Account of the guarantee fund, with Diekmann as the only one
with his name, address and bank account mentioned (Source: www.ihlia.nl)
Who was Han Diekmann? On 19 January 2008 historian Hans Warmerdam gave a lecture in Haarlem about Diekmann's life. This happened during the New Year's reception of the COC-Kennemerland and was connected with the campaign to open a Han Diekmann House, a house for Christian gays to get temporary shelter. Warmerdam gave permission to Bevrijding Intercultureel to summarize this lecture on our website.
Johann Heinrich ('Han') Diekmann was born on 29 July 1896 in Amsterdam. His father was German. In 1903, unable to manage on her own, his mother brought him to the Salvation Army orphanage. There he attended primary school, received an education and chose to become a Salvation Army soldier. When, at the age of 27 in 1923, he fell in love with a sixteen-year-old boy, he resigned from his work for the Salvation Army. His relationship with the minor continued, but was against the law (art. 248 bis). Diekmann's landlady reported him to the police and he was questioned by the Amsterdam vice squad, without further consequences. But the boy was sent by his parents to a reformatory school. Han Diekmann struggled with his homosexual feelings. He learned to accept them and remained a faithful Christian. A new relationship with an under-age boy had a bad ending, however. The friend got him into financial trouble and again he was reported for art. 248 bis. Diekmann was sentenced to three months in prison and placed under government supervision, in the psychopath ward of a psychiatric institution in Leiden (1928). When after some time he declared that he did not want to be gay anymore but to be a 'normal human', his doctors assumed he was cured. In 1930 Han Diekmann was set free. He immediately left the country and went to Belgium, where he became a successful businessman. For the rest of his life he kept silent about his imprisonment and detention in the psychiatric institution.
Han Diekmann (coll. Hans Warmerdam)
In 1938 the threat of war brought Diekmann back to the safety of the Netherlands. He settled in Amsterdam, in the Reguliersdwarsstraat. From April 1939 he rented a house at Noorderstraat 62. Through a young friend Han Diekmann met Nico Engelschman in August 1939. Engelschman (25) was much younger and very socially inspired; he and the much older Diekmann developed a relationship. It was said to be nothing more than platonic, but Diekmann remained in love with Engelschman for the rest of his life. Their brief relationship was of great importance to the Dutch gay movement. Later on Diekmann would say the COC was born out of it. Indeed, without his money, his house and his stencil machine, the founding and publication of Levensrecht, the brainchild of Engelschman and Jaap van Leeuwen, would have been inconceivable. But there was more. The call for the start of the publication was also sent to common acquaintances of Nico and Han. In the first edition, on the first of March 1940, only Diekmann was mentioned in the header, with his own name and bank account number. Nico was listed as 'Bob Angelo, editor'. This was also the case in the next two editions, so Diekmann can be considered the publisher of Levensrecht. Meanwhile the vice squad was keeping an eye on him, as well as on Nico. Their personal data and a copy of Levensrecht were sent to the attorney general. In the second week of March Engelschman had to appear at the police station. The vice squad asked for advice from the Ministry of Justice. The official replied: "This publication for gays carefully stays within the boundaries of the law. In my opinion intervention is not possible within the existing regulations." Shortly after the publication of the first edition the relationship between Diekmann and Engelschman ended. This led to discord within the circle of contributors and threatened the existence of Levensrecht.
The threat from outside was much greater. The German invasion on the 10th of May actually brought an end to the publication's existence. It was Han Diekmann who declared to the Amsterdam vice squad on August 5, 1940 that the publication of the magazine had ended. Almost all traces of the magazine had been erased by then. On the day of the German invasion the editorial board reduced the third edition (which was in the process of being stencilled), the remaining copies of former editions and the subscribers' lists to a pulp in a laundry tub and dumped it all into the Reguliersgracht. The money was still administered by Diekmann. Not until after the war would there be contact again between Diekmann and the other contributors.
In October 1941 Han moved to Haarlem, where he lived at the address Ripperdastraat 15-c. From April 1942 on systematic actions were undertaken by the Germans to get not only the unemployed, but also Dutch labourers in general, to do forced labour in Germany. In 1943 students and ex-military were forced to go. Many managed to avoid this measure. In August of that year, Generalbevollmächtigter Fritz Sauckel demanded 150,000 Dutch men. At first the maximum age was put at 45, but later on this was extended to 50. Han Diekmann now decided to go underground, probably in Amsterdam. There, on 6 June 1944, when the Allies landed in Normandy, he was caught in a raid. His red tie and his remark that he did not feel like working in Germany incriminated him on suspicion of being a communist. After a short imprisonment in camp Amersfoort Han Diekmann was put to work in the Messerschmitt airplane factories in Stuttgart. He worked there as a supervisor and administrator and managed to survive. In June 1945 he was back in Haarlem.
When Nico Engelschman started the magazine Levensrecht again in 1946 (number 4), the header mentioned only one name: J.L. van Dijk, with address and bank account number. Han Diekmann felt passed over, but eventually did become an active member of the circle of readers of the magazine, the Wetenschappelijk, Cultureel- & Ontspanningscentrum de Shakespeare Club
(Scientific, Cultural & Leasure Centre the Shakespeare Club). When in 1949 this name was changed to Cultuur- en Ontspanningscentrum (COC), Diekmann started a division in Haarlem (January-September 1951). In 1956 he was rewarded honorary membership by the COC. Han Diekmann died in 1989 in Heemstede.
Lecture Hans Warmerdam on 19 January 2008 before the COC Kennemerland
http://www.ihlia.nl/documents/pdflib/Levensrecht/1940/Levensrecht-01.pdf (also both other issues from 1940 can be found here as pdf-file)
http://www.publiek.coc.nl/cocupdatepdf/Update2005-4.pdf (Rob Tielman Jan Carel Warffemius)
http://www.herdenkenenvieren.nl/hev/4.mei/organisaties.van.de.naoorlogse.generatie (about the involvement of the COC on May 4).
Nico Engelschman
12 November 1913~1957
Nico Engelschman (©Collection Jan Carel Warffemius)
“Nico Engelschman was born on 12 November 1913 in Amsterdam as the oldest of five boys. His father was a travelling salesman, his mother a housewife. He was not raised with a specific political or religious outlook. His father was Jewish, his mother Lutheran – both non-practicing. Niek searched for and found his own way..."
"Soon Engelschman joined the labour force. Poverty due to his fathers unemployment did not allow for a secundary education. Engelschman was offered a job as a junior assistant with an export company in the Netherlands Indies. He worked there until the Japanese occupied the country in 1942. During that period he also became a member of 'Mercury', the General Dutch Union of Trade and Office Workers.
He joined the youth movement of the Union. ... In 1932 Engelschman became a member of the Independent Socialist Party (OSP), a party which had split from the SDAP in 1932. In 1935 the party merged with the Revolutionary Socialist Party of Henk Sneevliet into the RSAP. It was a rather small party with only a few thousand members. Young people, especially, were active within the RSAP. Engelschman became secretary of 'Revolutionary Socialist Youth', of which his brother Hennie also was a member.
During the winter of 1933 Nico had already been active in meeting fleeing German socialists at the Dutch-German border. In 1936 he wrote the one-act play Fascist terror about the bloody suppression by fascism. This pre-war resistance would continue after the occupation."
“Lectures were given on sexuality as well, following the example of Wilhelm Reich.
Homosexuality was mentioned briefly and vaguely. In the youth movement I also met Last and Rot. They were asked to give lectures. It was only later that I understood that Last also had gay feelings, though I should have known because of his book Zuiderzee, which mentioned that aspect.'
“At the age of 24 he became completely aware of his gay nature. In 1938, through an advertisement in the Wierings' Weekly, the Amsterdam free local paper, he became acquainted with an older gay man, an academic named Ellenberger. Through him he met Schorer (Jacob Anton Schorer esq. was a jurist who in the early 20th century began openly challenging the fact that homosexuality was punishable by law.
He died in 1957.), whom he visited twice. During that period he also became involved in studying all kinds of famous gay people in western history, including the ancient Greeks and Romans. He started to read Couperus. 'It became clear to me a lot of famous gays had been writers, something that so far I hadn't had a clue about. ... With all the admiration I had for Schorer and his contributors, I still felt it wasn't enough. More could and should be done.'"
“Engelschman decided to dedicate himself completely to this cause and stop his political activities. Sal Santen, also an RSAP member, described the way he was told about this by Sneevliet: 'Yesterday your friend Nico was with me. He wants to leave the party and the youth movement. ... It's not a political break.
But he also doesn't want to become a resting soldier. It's a very delicate situation. The young man is gay, he told me, and feels very lonely among all the others of a different nature, especially because he has to keep it a secret. He wants to start something for the rights of gays, whom he calls an oppressed minority, and maybe they are. I gave him my blessing.'” Sneevliet's children, the twin brothers Pim and Pam, were also secretly gay.
“Through Schorer, (Benno) Stokvis was given the name of Engelschman. Stokvis asked him if he wanted to write an autobiography. This resulted in autobiography III in Stokvis' book (The Gays. 35 autobiographies. Lochem, 1939). The fairly favourable reception of the book was an incentive for Engelschman to put into practice his idea about what could and should be done. Together with Ellenberger and Diekmann, at that time his personal friend, he undertook action. Late in 1939 a newsletter appeared announcing the start of a magazine for gays.
The newsletter was signed by Bob Angelo, the alias Engelschman would use from then on and under which he became widely known.” On the first of March 1940 the first issue of Levensrecht, Monthly Magazine for Friendship and Freedom, was published. By May, when the magazine was discontinued because of the occupation, there were 190 subscribers.
Identity Card (28 January 1943) of Nico Engelschman:
his profession ‘office assistant’ has been changed to ‘actor’
(©Collection Jan Carel Warffemius)
When the Germans occupied the Netherlands in 1940, Engelschman joined the resistance. "During the war I was active in the resistance, but I felt it wasn't that big a deal. One of my brothers and I helped Jewish friends. They're still alive. Jef Last and Tom Rot, who also were in the resistance, came to my house once a week for a meeting. This was from 1943 on, when I started living at the Keizersgracht. I was in hiding, sometimes at my mother's place, sometimes in other houses."
“Engelschman presumably was involved in the resistance group of the RSAP. The RSAP was also connected to the Vonk group where Last and Rot were members.”
”During the thirties Engelschman became a member of an amateur theatre group, with no specific political or religious profile. In 1942 he tried to register for the theatre school. He was not admitted because he had two Jewish grandparents, but he was allowed to become a student teacher for half a year. After that he took classes from actors who were well-known at that time. After the war he joined the theatre group 'The Fifth of May', which consisted of people who had all refused to join the Kulturkammer during the war. From that moment on he became a professional actor."
In 1946 Engelschman started Levensrecht again, the precursor to the COC. One of the honours he would receive posthumously was the bridge named after him (1998) near the Amsterdam Gay Monument. In Leiden (1990), Nijmegen (1991) and Groningen streets had already been named after him, and in The Hague a park.
The bridge named after Engelschman in Amsterdam (Source: members.chello.nl)
users.cybercity.dk/ ~dko12530/53.htm (drs. J.N. Warmerdam & drs. P. Koenders 1987, conversation with Nico Engelschman)
http://www.offensief.demon.nl/krant/offensief-165.pdf (Ron Blom)
www.publiek.coc.nl/cocupdatepdf/Update2006-1.pdf (Hans Warmerdam, picture 1938)
www.publiek.coc.nl/cocupdatepdf/Update2005-4.pdf (Picture PB 1943)
members.chello.nl/mennevellinga/homomonument.html (Picture bridge)
www.gaypnt.demon.nl/straatnamen/E.html (streetnames Nijmegen)
Homo Encyclopedia of the Netherlands (2005)
Jaap van Leeuwen
Jaap van Leeuwen
“Jaap van Leeuwen (1892-1978), born in Leiderdorp, was active within the Dutch Scientific Humanitarian Committee (NWHK), which was founded by esquire Schorer in 1919 and which disbanded when the Nazis invaded”. This also applied to the magazine Levensrecht, which had just released three issues. Van Leeuwen was one of the editors, using a pseudonym. Below is the final paragraph of his review, published in number 1, of the publication ‘The Gays’ by Mr. S. (Benno Stokvis).
Again an honorary salute for the work of Mr. S., whose person became sympathetic to us by the establishment of this publication, which, we already predict, will bring him more abuse than success, and indeed has already brought it to him simply because it was published in the first place (see Preface). Let Mr. S. be assured, however: the Truth always triumphs! The Samaritan, who shows compassion, is more than the Priest and the Levite, who pass the victim with contempt and anathema. And in these matters, today's church is the successor of Priest and Levite; not of the Merciful Samaritan.
During the war Van Leeuwen became involved in the Parool group. In the autumn of 1941 his distribution group was betrayed and arrested. Because nothing could be proven against him and he resisted vehemently and remained unbroken, he was released after seven months. Fortunately he had hidden all books and documents that could compromise himself and his group at his parents' house in Zeist. Everything that was in his room in Amsterdam with the Addicks family was destroyed or taken away. In 1943 he went into hiding again when he found out the Sicherheitsdienst was looking for him. The correspondence that was kept shows that he continued networking during the occupation. It helped that even before the war he had been used to leading a double life. Much of the communication by letter was exchanged through poste restante arrangements with newspaper stand holders, for instance on the Spui in Amsterdam. Before, during and after the war he also used the alias Arent van Santhorst. In the Goffert district in Nijmegen a boulevard is named after him; in the same district Benno Stokvis, Niek Engelschman, Anna Blaman and mayor Dales also had streets named after them.
The boulevard in Nijmegen named after Van Leeuwen (Picture: Rob Essers)
www.publiek.coc.nl/cocupdatepdf/Update2005-4.pdf (Rob Tielman Jan Carel Warffemius)
www.gaypnt.demon.nl/straatnamen/E.html (streetnames Nijmegen)
e-mail Rob Essers (picture Van Leeuwenlaan)
Joop Leker (29 July 1920 - 23 April 2008)
Joop Leker, 2007 (Picture: Mireille Vroege)
Joop grew up in the popular Amsterdam neighbourhood 'Haarlemmerbuurt'. There were five children in the family: three sisters, Joop and a disabled brother. His father had a jewellery store, which he neglected because of drinking problems. At the age of fourteen Joop started working. When he found out he was gay he had a hard time. His mother and his sisters accepted his orientation, his father did not. He continued working, followed evening classes and after the war rose to personnel manager at a machine factory in Delft. Later he became a successful businessman. Joop was present when the COC was founded in 1949. At that time he used the pseudonym 'Joop van Delft'. During the years 1954-1956 and 1961-1963 Joop Leker was a member of the board of directors. He played a role in the discussions about the new course the COC would take from the sixties on, during the chairmanship of Benno Premsela. In 1991 Joop's friend Peter Overbeek died of AIDS. They had been living together for thirty years. Year after year, on AIDS Memorial Day, Joop would read out the names of the AIDS victims.
During the war Joop didn't join the resistance, but he did commit acts of bravery. At the beginning of the war Joop Leker stood up for his Jewish aunt Stella, head purchaser at the Bijenkorf department store in Amsterdam. In the spring of 1941 the company, which was mainly Jewish, got a German 'Verwalter' (manager). He was supposed to lay off the Jewish employees. But the man was a 'good' German. Joop went to talk to him and it was agreed that his aunt's salary and that of other laid-off Jewish staff would be continued. When she went into hiding, he brought the money to her hiding place every month. Reportedly he did this for his aunt's colleagues as well.
During a raid in the spring of 1943 his Jewish relatives were taken from their house. When Joop heard about this 'he ran to the house, ignored the snarling policemen, saw the youngest son standing beside his mother and grabbed him. Together they walked away; a bold and extremely dangerous action.' Joop brought the boy (3) to the house of a friend. When their new neighbours in Zwanenburg turned out to be NSB-members (Dutch nazi party), they brought him to another hiding place in Katwijk. The little boy was saved. His older brother was gassed on 16 April in Sobibor, his parents at the end of August in Auschwitz. His sister (19) died at the time of the liberation of Auschwitz, in January 1945. Later on Joop would always ask himself why he had not married this sister earlier.
During the summer of 1943 a friend he knew from the sea scouts, Freek, was captured and sent to Camp Amersfoort. Freek worked at the airplane factory Fokker, a ‘kriegswichtig’ (important for the war) company. As with his aunt, Joop went to the manager again and got him to write a letter saying Freek was indispensable for the company. Joop had the German authorities in The Hague stamp the document and took it to the camp commander. A few days later Freek was a free man. They stayed friends for life.
Peter Brusse, ‘Uit het leven’, Volkskrant 24 May 2008 http://www.coc.nl/dopage.pl?thema=any&pagina=viewartikel&artikel_id=2315 www.joodsmonument.nl http://geschiedenis.vpro.nl/programmas/3299530/afleveringen/1132219/items/9070561
Henri Methorst (1909-2007), publisher and interpreter from The Hague, provided a safe house for the psychiatrist Coen van Emde Boas and his wife. He was given the Yad Vashem decoration. After the war he became a prominent member of the COC, through his work for the International Committee for Sexual Equality (ICSE). Henri was one of the first Dutch interpreters when the various bodies of the European Community were starting up.
Homo-Encyclopedia of The Netherlands (2005)
Gé Winter (1909-1992),
an amateur actor from Amsterdam, and Mau van Spiegel (1904-1981), a dance teacher from Deventer, began a relationship in 1940. Mau was Jewish and went into hiding at his friend's house. Gé Winter saved the lives of Van Spiegel and many of his relatives, as well as others. For this he received a Yad Vashem decoration from the Israeli government. After the war the two friends performed at COC parties as the comic duo 'The Ladies Van Pothoven to Ruigenhoek', impersonating two aristocratic women.
Wolfgang Frommel (Source: www.gaynews.nl/article04.php?sid=669)
Frommel, inspired by the German poet Stefan George and a conservative elitist in his opinions, was not uncritical towards national socialism in his publications. After Hitler came to power he accepted a position in Frankfurt as a radio broadcaster (1933-1934). The name of his program was 'Vom Schicksal des Deutschen Geistes' (The Fate of the German Spirit). Frommel wrote newspaper articles as well. Expert Michael Phillip remarks about this period that Frommel's attitude towards the Nazis was one of conformity. He maintained contacts with the Frankfurt Hitler Youth (HJ) and with high-ranking officials of the party in Berlin. A local leader of the HJ, Sven Schacht, was his lover. After the Röhm Putsch in 1934 (a political purge by Hitler against his opponents), Sven became a victim of the anti-gay measures and died in the Mauthausen concentration camp.
Frommel too came under the suspicion of the local Gestapo, who understood that he was gay. He also had contacts with Jewish friends. Frommel went to teach at Greifswald University in 1934 and worked for the National Radio in Berlin. Later he went to Switzerland (Basel), Italy (Florence) and France (Paris). He came back to Germany and left again in 1937 for France. In 1936 the Nazis had put his best-known publication, Der dritte Humanismus (The Third Humanism), on the blacklist.
Frommel fled to Holland in 1939 and decided to stay. He became a member of the artists' colony of Bergen. He lived at 'De Zonnebloem' (The Sunflower), the house of the painter Etha Fles. She took him in on the recommendation of the then nationally celebrated poet Adriaan ('Janie') Roland Holst, who knew Frommel from a visit in 1925.
F.W. Buri (exp. Castrum-NIOD)
In Bergen Frommel began to assemble a circle of young friends around him (Marita Keilson-Lauritz, 2006). From Bergen came the young couple Vincent Weyand (Bergen, 31 October 1921), son of a painter, and Chris Dekker (1922-1996). There were also two of his old friends from the Quaker School in Ommen for refugees, mainly from Germany: William (Billy) Hildesheimer and Adolf Wongtschowski ('Buri'). Frommel had helped them flee to Holland. Since 1937 the painter Buri had worked as a teacher of textile arts in Ommen, and Billy was a music teacher. Billy organized musical-like productions, as he later also did in the German internment camp, where he survived the war thanks to the American nationality he had acquired earlier (with thanks to Rien Buter, July 2010).
In 1941 Gisèle van Waterschoot van der Gracht (The Hague, 11 September 1912), who had lived in Bergen with her parents since 1940, made a portrait of Roland Holst. Roland Holst asked her to meet a German friend, Wolfgang Frommel, a Protestant poet from Heidelberg. Gisèle did so and offered him her help, if needed. From 1940 on she had a pied-à-terre on the third floor of Herengracht 401 in Amsterdam: only a couple of rooms and no kitchen.
From January 1942 on the coastal regions were cleared of Jews, and the Jewish citizens of Bergen had to leave their homes on April 22. Frommel also felt insecure and moved in with Gisèle in the apartment on the Herengracht. So did another gay German writer, Wolfgang Cordan (see below), with whom Frommel had been in contact since 1940 on the basis of their kindred spirit. Marita Keilson-Lauritz assumes that Cordan was the first to find refuge with Gisèle and that he left when Frommel came. Buri (Frankfurt 1919) soon followed. He had left Ommen in September 1940 and found shelter with the artist Charles Eyck in Limburg. When the 'Jewish Star' was introduced on 1 May 1942 it became unsafe there. Frommel visited him and invited him to come to Amsterdam.
This was far from easy. Vincent Weijand agreed to travel by taxi past a pre-arranged place near Sittard on his way to the station and, seemingly on impulse, take Buri along as a hitch-hiker. At the station, Wolfgang Frommel awaited the two young men and took them to Amsterdam. He used a yellow band which he had kept from his military service in Hitler-Germany. Meanwhile Charles Eyck had discovered a letter of Buri's saying that he planned to commit suicide. Gisèle welcomed the heroes with red roses. It happened on 8 July 1942.
Thanks to the research of his American daughter, Francesca Rheannon, we know that Joseph Antonius Hubertus Maria (Guido) Teunissen (Weert, 1917-1979) and his wife Miep (Wilhelmina) Benz (1920- ) had lived on the fourth floor of the house at the Herengracht since 1939. When Buri joined Gisèle and Wolfgang, the neighbouring couple was informed about the political background of the two Germans. Guido was a skilled carpenter, although he was working as a bicycle messenger, and Frommel asked him to help build inventive hiding places. For example, Guido built a hiding place in the pianola by taking out the little motor. This place would save Buri's life during a raid. Miep worked in the Jewish-owned department store 'De Bijenkorf' (The Beehive). The couple proved to be reliable and became important for the beginnings of the Castrum project. Their floor became part of the hiding activities of the Frommel circle. As Marita Keilson writes, Joseph was called 'George' and later (by Percy Gothein) 'Guido', the name of an intimate friend of Dante. The name Castrum Peregrini, by the way, comes from the magazine which started in 1950. The first two floors of the house were occupied by people who had no connection to the group of Wolfgang Frommel.
Castle Eerde and De Esch
Kasteel Eerde en De Esch (inset)
Drawing by Haro op het Veld from 1952 (source: www.haroophetveld.nl)
Frommel and Cordan were in contact with a boarding school in the fairytale-like 'Castle Eerde' on the Van Palland estate in Ommen. They were invited to give lectures, but never got an appointment as teachers. On 12 October 2007 a discussion was held at the NIOD (Dutch Institute for War Documentation) about the origin of the school. During the early thirties the Quaker communities in the United States, Britain, Germany and the Netherlands wanted to start an international school in Germany, with a democratic foundation and English as the official language. The diploma was the internationally acknowledged Oxford School Certificate. The Nazis, of course, refused. So the school started in Holland, and the Quaker teachers from Nazi Germany were appointed there. Children of Jews and other opponents of Hitler who had fled became students; a few Dutch and English children came to Eerde as well, drawn by its high academic standards. On April 4, 1934 the school was opened by Quaker leader Pieter M. Ariëns Kappers: 'Du kennst keine Völker, Du kennst keine Rasse' ('God, You don't know nations, You don't know race').
During its heyday there were 120 pupils, taught and educated by 20 to 30 staff members. The school operated in the tradition of the progressive German 'Landeserziehungsheim' (National Educational Home). Because of the German occupation the Quaker school in Eerde was in danger, especially after the German racial laws were introduced. The Quakers did not want to close the school, however. The occupation did not seem too bad, and where would the staff and teachers go? In the best Quaker tradition of silent diplomacy and good trust, Ariëns Kappers personally contacted the German occupier. A friend and former fellow student, a member of the SS, was Kulturbeauftragte (cultural representative) with Seyss-Inquart (the Reichskommissar during the occupation of the Netherlands). In September 1941 Jewish children were forbidden to continue lessons in non-Jewish schools. Eerde followed the ban. Many Jewish children had already been returned to their parents before May 1940, but 9 of them still remained in Eerde, as well as 3 teachers. They were separated from the rest of the students in the children's home 'De Esch', elsewhere on the estate, which thus became the Ommen Jewish School. Teachers and students promised not to escape or go into hiding. Tragically, the nine pupils who kept their promise became victims of the Holocaust. The three teachers and most other pupils survived. Four of the survivors belonged to the circle of friends around Frommel and Cordan.
One of them was Claus Victor Bock, born in Hamburg (1926), who arrived in the Netherlands via Brussels. His father was Czech, his mother German. On 21 September 1938 the Jewish family fled Germany, just in time: on the last day of that month the Munich Agreement would be signed, and from 22 September on Belgium would not allow Czech passport holders to enter the country. Father Bock was a merchant in chemicals and had business contacts in Brussels. By order of a Belgian firm he was able to go with his wife to the British Indies for one year, as was thought at the time. They thought it would be better to leave young Victor in Europe. But where? The Quaker boarding school in Eerde, established especially for German refugee children, seemed a likely choice. The house-mother of the school, Josi Warburg, had been a classmate of Bock's mother. In the spring of 1941 Frommel gave a lecture at Castle Eerde and had, as Marita Keilson puts it, a turbulent-erotic encounter with Claus. In his memoir Untergetaucht unter Freunden (In hiding among friends) (1985) he describes this encounter as a 'spark that ignited'. Frommel adopted him as a disciple and in August 1942 found an address for him in Bergen, with the family Dekker-Maathuis on the Guurtjeslaan. Rheannon writes that Frommel asked Guido to build a hiding place there as well, under the floorboards of son Chris' bedroom. It was also Guido who brought Claus to the Dekkers. In February 1943 Claus Bock joined the group on the Herengracht and stayed with Guido and Miep on the fourth floor.
In March 1941 Wolfgang Cordan also gave a lecture in Ommen. Now, as Keilson writes, another pupil made a big impression: the 17-year-old Johannes Piron (father's name: Kohn). From this encounter a life-long relationship arose. A second 'unexpected following' (Cordan) occurred in the form of his friendship with Thomas Maretzki, a Jewish pupil of the school who had just graduated but had not yet found another place to live. After the introduction of the Yellow Star (May 1942), Cordan persuaded him to leave Castle Eerde and join him in Bergen. At first they found shelter with an old friend of Wolfgang's, Theo van der Wal (see below), and later with the mother of Chris Dekker, who belonged to the circle of friends around Frommel.
Polderhof in Bergen
Polderhof in Bergen
Chris Dekker by Haro op het Veld
Source: www.haroophetveld.nl - Van Gruting Publ.
Next Chris rented a house on the outskirts of Bergen, where Wolfgang Cordan could live with his protégés. Johannes Piron ('Angelo') came, as well as the German-Jewish student from Ommen Liselotte Brinitzer. Eva Kohn, a sister of Johannes, brought her to Frommel in Bergen. Cordan called the house the 'Polderhof'. Between his stay with the mother of Chris Dekker and his move to the Herengracht, Claus Bock also lived at the 'Polderhof' for a short while. Because of the impending evacuation of Bergen, the house was closed in 1943.
The Frommel circle
Manuel and Peter Goldschmidt, 'half-Jews' according to Nazi laws, were also students from Eerde. They belonged to the Frommel circle. Their non-Jewish mother arranged safe papers for them, and their non-Jewish appearance made it possible for them to leave Ommen without going into hiding. Manuel lived in a boarding house on the Amsterdam Singel and was a regular visitor to Herengracht 401, as was his brother Peter. Other friends and frequent visitors included Reinout van Rossum du Chattel and, from Bergen, Chris Dekker and Vincent Weyand. When the author Percy Gothein visited them in November 1943, a special picture of the men of the Frommel circle was taken in the kitchen of Miep and Guido Teunissen (Keilson-Lauritz 2006; picture below).
Back row (left to right): Vincent Weyand, Peter Goldschmidt
Middle row: Reinout van Rossum du Chattel, Manuel Goldschmidt, Chris Dekker
Front row: Friedrich W. Buri, Wolfgang Frommel, Percy Gothein, Guido Teunissen
(Source: www.castrumperegrini.nl and Peter Elzinga)
Finally, De Esch student Clemens Brühl arranged his own hiding places and became active in the Dutch resistance. He kept in contact with Wolfgang Frommel and his circle. Another Jewish student, who also went into hiding on his own, felt that the contacts of Frommel and Cordan in Ommen were characterized too much by a gay atmosphere for him to want to be part of it.
On 10 April 1943 De Esch was cleared. The remaining residents, as agreed with Ariëns Kappers, went 'voluntarily' by public transport to camp Vught. From there the group ended up in camp Westerbork, where together they read Latin writers like Tacitus and Sallustius and books by Fichte, Goethe and Tolstoy. Three of them were murdered in Auschwitz later that year, on 24 September. The last of them, Hermann Isaac, died at the time of the liberation of that camp, on 21 January 1945 (see: www.joodsmonument.nl at Eerde).
Rheannon describes how Claus and Manuel, together with Buri, belonged as Germans to the inner circle around the charismatic leader Frommel. In the second circle the young Dutchman Vincent Weyand (or Weijand) was the primus inter pares, Frommel's favourite. But he did not live at the Herengracht; he lived in Bergen and later in a room on the Singel. He was a son of the painter Jaap Weyand and his Jewish wife, and therefore a half-Jew according to the Nazis. Gisèle was the 'mother' of the circle, and she was important because of the help and resources she provided. Fellow artists who did not join the Kulturkammer, like Mari Andriessen and Adriaan Roland Holst (Roland Holst later did join under pressure), supported her with food coupons, as did Adriaan's brother Eep. But neither Gisèle nor Miep Benz, as women, was allowed to attend the all-important nightly poetry readings. These readings were the main social activity. Guido, although not an intellectual, was part of them, since he was a man.
Frommel taught the group of Jewish and non-Jewish, German and Dutch young men about the works of Goethe, Hölderlin, and George. Or as Gisèle said in the radio broadcast, "As a friend, father and professor he teaches them about Greek culture". In the same broadcast Manuel Goldschmidt compared it to a 'Hebrew school', and described his experience as follows: "When we were reading poetry we were invisible". Outside there were raids, and the house itself was searched. But inventive hiding places had been made, such as the hollow pianola and the hidden elevator shaft which led up to the attic and beyond.
According to Claus Bock, the search on 15 October 1944 followed a nervous reaction by the 'Grüne Polizei' (German Order Police in green uniforms), who thought they heard the sound of a radio transmitter. It was the typewriter Buri was working on. Francesca Rheannon talked about this with Miep Benz, who was able to tell her the real facts. Miep showed her a note which was given to her after the war by former mayor Voute and which said: "There are people in hiding at Herengracht 401". The note came from the files of the Sicherheitsdienst at the Euterpestraat and was written by a distant relative of Miep. This woman stayed in the Schiller Hotel on the Rembrandt Square, which was owned by a common relative. Miep and Guido were invited to come once a week for a good meal and met the woman there. She fell in love with Guido and was disappointed when he refused to sleep with her. She knew that the two helped people in hiding and sent the note to the SD (e-mail Francesca Rheannon, 10 January 2008).
On 12 October 2007 at the NIOD Claus described the action: "The German officers went upstairs. On the fourth floor Miep Benz opened the door, in a pink nightgown and obviously pregnant. They left that floor and went down to the third floor. Gisèle opened and was held at gunpoint by six policemen. Her mother was Austrian, so she was fine. Meanwhile Buri managed to hide in the pianola. In the house, a portrait of Hitler was glued on the back of a portrait of Stefan George. It was quickly turned around. One of the officers said: 'Nichts los also' ('Nothing's the matter'), which made Buri crawl out of the pianola. Just in time he saw who the visitors were and he crawled back. In the kitchen was Wolfgang Frommel, a German citizen. He declared - truthfully - that they were just having a Nietzsche circle in the house, for three days later it would be the philosopher's birthday. Manuel Goldschmidt (apart from being half-Jewish) was also a German citizen. Reinout van Rossum had a well-forged exemption from the Arbeitseinsatz (forced labour). Now it was the turn of Claus Bock. He was asked for his papers. It was an expired Czech passport. Frommel told them Bock was a Sudeten German (German minority in the north of Czechoslovakia) on the run. 'With papers?' 'No'. The German policeman went silent but was obviously impressed by the situation and advised Frommel to get a 'Fallschirm' for Claus; a 'parachute', i.e. forged papers. Then he asked who lived below. 'Alsema? On to them!'"
The website of the current Castrum Peregrini describes what the members saw as the core of the underground commune: life with poetry and the visual arts and the in-depth study of the work of Stefan George (1868-1933). After the war this continued and the house became the institution it is today.
Contacts with Max and Quappi Beckmann
Max Beckmann, Triptiek Schauspieler (Source: Kemper Artmuseum, St. Louis); Max Beckmann, Les Artistes; Max Beckmann, Gisèle (1946) (Source: Castrum)
During the war, Wolfgang Frommel and Gisèle van Waterschoot van de Gracht regularly visited the German refugee painter Max Beckmann and his wife ‘Quappi’ Kaulbach. The couple lived at the Rokin 85. It is possible that in 1941 Beckmann visited the exposition of Gisèle's paintings at Art Dealer Van Lier, the same gallery where he had a solo exposition in 1938.
Frommel appears in two of Beckmann's paintings: first on the left panel of the triptych Schauspieler (Actors) (1941-1942), and secondly in the painting Les Artistes mit Gemüse (The Artistes with Vegetables) (1943). In both cases Frommel takes up an almost priest-like position towards the others depicted. In Schauspieler he raises his finger against a warrior in medieval dress, who seems to be arresting him. A woman stands in between, praying.
In the other painting he sits at a table, with Max and two other exiled painters: Friedrich Vordemberge-Gildewart and Herbert Fiedler. Each is holding an object. Wolfgang Frommel holds a loaf of bread which he seems to be breaking, like Jesus for his disciples. Beckmann wrote about Frommel in his diary on the 16th of February 1943: ‘Fr. was here, someone who has a true relation with my paintings’. In 1945 Beckmann made a drawing of Gisèle as well as of Wolfgang.
Wolfgang Cordan (during the war) (exp. Castrum-NIOD); Cordan's Muschelhorn (1944) (exp. Castrum-NIOD); Charles Eijck, drawing for Wolfgang Cordan (ca. 1960) (Source: www.gaynews.nl/article04.php?sid=669)
Hekma also writes about two other literary personalities. Wolfgang Cordan (alias of Wolfgang Heinrich Horn, 1908-1966) was a German journalist, poet and writer who fled to France in 1933 and from there to Holland in 1934. As Marita Keilson-Lauritz relates, the writer Jef Last and, through him, the young journalist and author Theo van der Wal were Cordan's hosts. In France he wrote a booklet against the Nazis, 'L'Allemagne sans masque' (Germany Unmasked) (1933), with a preface by André Gide, and in Holland he wrote an essay on surrealism (1935). This book instantly made him an avant-gardist in Holland. He was also editor of the leftist literary magazine The Fundament, published by Contact (1934-1937). Dutch writers as well as German exiles like Klaus Mann and Willy Brandt and the French surrealist Louis Aragon contributed. Between 1937 and 1939 Horn was in Berlin. Back in Holland he set up the surrealistic-political magazine Centaur. The title refers to the mythological half man - half horse figure, one of which (Chiron) was the teacher of young heroes. The first two issues were published by the renowned publisher Stols in Maastricht. Cordan also published in the magazine Halcyon (1940-1944), wrote Spiegels (Mirrors) on modern Dutch and Flemish poetry, and a lot more.
In 1940, through Adriaan Roland Holst in Bergen, he met Wolfgang Frommel. Both refugees were of like mind in literary and erotic matters. Frommel experienced the flash of lightning that Cordan mentioned in a letter to Roland Holst. By this Cordan meant the 'deepest contact between two men of about the same age and of equal spiritual development, a flash of lightning which breaks down all barriers and fuses two natures' (July 4, 1940). Despite disagreements, Frommel contributed to some of the Centaur issues during the war. It is possible that in 1940 Cordan was not yet a practising homosexual, but he showed a certain preference, as in the poem Die Insel Urk.
Urk is also the place where the left-wing writer Jef Last and the painter Willem Arondéus found the love of men, a kind of Dutch Island of Capri:
"Es gehn mit schwerem blick die bleichen
Jungfrauen durch die alten Gassen,
Die Burschen hinter Ställe schleichen
Und müssen sich verliebt umfassen.
Es spricht der Pfarrer streng von Sünde,
doch haben nur die Frauen glauben -"
(from: 'Das Jahr der Schatten', 1940)
(The pale girls walk with eyes downcast
through the old alleys,
the boys sneak behind sheds
and have to lovingly embrace each other.
The vicar harshly speaks of sin
but only the women have faith -)
For a short while (ca. June 1942 - February 1943) Wolfgang Cordan formed a circle of friends (on the Polderhof) similar to the one Frommel would maintain for a long time at the Herengracht.
In a lecture at the NIOD (12 October 2007) the differences between Cordan and Frommel were pointed out as follows: Wolfgang Cordan thought that Frommel, with his contacts in high-ranking nazi circles, was unsuitable for the front-line resistance work in which he himself was engaged. Frommel, on the other hand, thought that Cordan had left 'the mountain of the poets' and therefore was not suitable for activities in the pilgrims' castle at the Herengracht. Marita Keilson-Lauritz in her publication (2006) poses a question that applies to both men: didn't they only save their darlings, the good-looking boys? As one of the survivors said: was Clemens Brühl not beautiful enough to get help in hiding? Another question is whether we can approve of the homo-eroticism with minors, inspired by Stefan George (like Frommel's relationship with Claus Bock). Maybe not. The fact is that most of the protégés of Wolfgang Frommel and Wolfgang Cordan survived the war and the persecution of Jews, and their testimonies on the friendship and inspiration they experienced during this period are very positive. This also applies to Claus Bock, who was then still a minor.
Cordan went his own way. Being a known progressive artist, he had to go into hiding. After a period in Antwerp, where he worked with the publishing couple Kollár-Veen, he moved in with Johannes Piron in Amsterdam, in the Euterpestraat. He owed this place to the Kouwenaar family, acquaintances of Roland Holst. Both became more and more involved in the resistance, especially in the National Armed Resistance Groups. This was also the case with Thomas Maretzki and a new protégé, the young helmsman Jan Monnier. With his little 'ark' of friends and with the gay poet Jac. van Hattum, Cordan also published a resistance magazine, Resistance and Construction (February - May 1945). After the war Wolfgang Cordan was one of the actors in a documentary by Max de Haas about the Christian and communist resistance.
Percy Gothein and Vincent Weyand
Hekma describes Frommel as being the favourite of the poet Percy Gothein, who in his turn had for some time belonged to the circle of Stefan George. George was the poet who, adored by a small circle, was asked by Hitler shortly before his death (1933) to become the National Poet. Stefan George may have felt honoured but refused, for various reasons.
Percy Paul Heinrich Gothein (1896-1944) was the son of Eberhard and Marie Louise Gothein, Heidelberg intellectuals. Eberhard Gothein was a noted sociologist and had Jewish roots; Marie Louise Gothein wrote the definitive study of English gardens. Stefan George often visited the family and 'discovered' the child Percy as a rare poetic soul. Percy became an author. As Rheannon writes, he was unable to get work in nazi Germany because he was 'non-Aryan'. Gothein went to Italy, where he lived until 1943. He had to flee when the American and British troops arrived. Gothein went to Stuttgart and briefly got a job at the Württemberg provincial library. Then Wolfgang Frommel wrote to him from Amsterdam with a proposal to have some of his work published in Holland. Percy, who had visited Frommel once in 1943, went to The Netherlands in the spring of 1944 and lived in the house on the Herengracht, in the front room of Miep and Guido. The men became close friends. Gothein was the first to use the name 'Guido'. Initially Miep, as a woman, had to move elsewhere. Gothein also met other people of the Frommel circle there. In 1944 he followed two of them to Ommen and was arrested there. Wolfgang Cordan writes in his diary:
Back row: Wolfgang Frommel, Martijn Engelmann, Guido Teunissen
Front row: Haro op het Veld, Percy Gothein (1944)
(Source: coll. F. Rheannon)
“Ommen, 30 July 1944 (more likely July 25, ed.). Sad symbol: during the night Percy Gothein has been caught in the act / shaven and locked up in the local concentration camp / with him two guys, one of whom was truly the corpus delicti ... this is a crime - not in a juridical but in a philosophical sense ...". Rheannon explains that Gothein left Amsterdam because of discord with Frommel. The attack on Hitler (20 July 1944) could have been the cause of the arrest: Gothein had connections with the Kreisauer Kreis, who prepared the attack. Through him a letter for the British Government went to the poet Geerten Gossaert in 1944 (see www.dbnl.org). After 20 July Gothein fled to Castle Eerde in Ommen, where Vincent Weyand and Simon van Keulen were also staying in order to evade forced labour. Gothein knew Simon from a street encounter. They lived in villa De Esch; the nine remaining Jewish students of the Quaker school had been transported to camp Vught in April 1943. Maybe Vincent and Gothein thought that De Esch would be safe after the raid.
Francesca Rheannon suspects that Gothein was reported by someone in Ommen. 'The police came to the house where he was with Simon to ask him to come to police headquarters the following day. They found him in bed with Simon, and somehow the news got to the Gestapo.' Simon later claimed that 'he and Percy were in bed together only because there was no other bed around'. But Gothein had been arrested in Germany two or three times under the infamous anti-gay paragraph 175, and he made no secret of his sexual orientation - possibly one reason why the poet Stefan George rejected him in the 1920s. George was very secretive about his own gay feelings.
From Castle Eerde Gothein and Van Keulen were transported to the cruel Erika camp, a penal camp in Ommen manned by mostly Dutch personnel. Four days later, on July 27, Weyand was arrested as well and brought to Erika in the car of camp commander Werner Schwier. Rheannon was told by Vincent's brother Olaf Weyand that the Dutch guards accused the young men of being gay and beat them up. Gays were often beaten up in the camps, by the guards, their fellow prisoners, or both. Gothein had been separated from Simon immediately upon entering Erika. Simon reported seeing him a few days later from a window, and that he had looked very 'bad'.
Vincent Weyand (1944) (Source: Castrum-NIOD); cover of the book about Weyand (Publ.: Van Gruting - ISBN-13: 978-90-75879-43-8)
When the news of the arrest of Percy, Simon and Vincent and their transport to the infamous camp in Ommen reached the Herengracht, precautions were taken. Under torture, information about the circle around Frommel and Gisèle might come out. Buri and Bock went into hiding elsewhere. The fear turned out to be unfounded and the two returned in September.
During that time they found out that Simon van Keulen was in camp Amersfoort. Gisèle, after waiting a whole morning at the Sicherheitsdienst, managed to get permission to visit him. The permit was signed by the chief, Willy Lages. On 12 September she went to the camp by tandem bicycle with Guido Teunissen. She introduced herself as a friend of Lages, bribed the guards with cognac and cigarettes, made use of the fact that the brute Kotälla was drunk, and had a 5-minute conversation with Simon. She could tell the badly beaten young man that the hiding place at the Herengracht was still functioning and that nobody had talked. On 19 October Simon jumped from the train which would have transported him to Germany, and appeared at the Herengracht 'like a ghost'.
Monument for gay victims Neuengamme (picture: fcit.usf.edu)
Percy Gothein was transported to Sachsenhausen and from there to the Neuengamme camp, where he died on December 22, 1944. During that time Willem Niemeijer was held there as well. On August 18 Vincent Weyand was brought to the Amersfoort camp and from there, ten days later, to the Dutch deportation camp for Jews, Westerbork. On September 4 the last train to Auschwitz departed, but Vincent was on the train that left the camp on September 13 for the relatively mild Bergen-Belsen camp. The train also transported 77 children from the Westerbork orphanage, a group of diamond workers, and a group of 44 Turkish Jews (see chapter on Turkey). Vincent Weyand, who had been arrested as a political prisoner, was deported again. He died on February 21, 1945 in Buchenwald.
After the war
After the war, Wolfgang Frommel became the leader of what would become known as Castrum Peregrini and its magazine of the same name, published in German (1951). Shortly after the liberation Frommel was thought to be a German soldier and almost thrown into the canal. Cordan was involved in re-establishing the literary magazine Centaur. After his years in Holland Cordan wrote several books on the Mediterranean countries and two novels, one of them the homo-erotic Julian der Erleuchtete (Julian the Enlightened) (1950).
Guido Teunissen left Miep and went to the US. Miep Benz married Chris Dekker. In 1973 Wolfgang Frommel received a Yad Vashem decoration from the State of Israel for hiding and helping Jews during the period of the Holocaust.
After the War
7 May 1926 ~ 5 January 2008
Claus Victor Bock (Picture: www.castrumperegrini.nl)
After the war Claus Victor Bock (Hamburg, 7 May 1926) went to his parents in India, who had been working there since the late thirties. After a year he returned to Amsterdam to study literature.
He continued his studies in Manchester and Basel, where he obtained his doctorate in Germanic studies (1955). In England he worked mainly in London. For eight years he was the director of the Institute of Germanic Studies. In 1980 he became Dean of the Faculty of Arts. In 1984 he went into early retirement to return to Amsterdam, to Castrum Peregrini.
He published his account of the time he was in hiding - Untergetaucht unter Freunden (1984) - and carried out several activities for the Foundation and the publishing house. His book was translated into Dutch in September 2007: As long as we write poems, nothing will happen to us, Amsterdam 1942-1945. Claus Victor Bock died unexpectedly and peacefully on 5 January 2008, in the house where he had been in hiding during the war.
In 2007 the NIOD (Dutch Institute for War Documentation), together with Castrum Peregrini, organised an exposition about the work and friends of Gisèle van Waterschoot van der Gracht. Some pictures and information in the text are from this exposition, and some information is derived from an afternoon seminar at the NIOD on 12 October 2007.
Gisèle d'Ailly-van Waterschoot van der Gracht (Source: www.onsamsterdam.nl)
www.gaynews.nl/article04.php?sid=669 (article Ger Hekma, picture Cordan)
E-mails of Francesca Rheannon (November-December 2006 - Guido Teunissen, Vincent Weyand, Castrum)
Marita Keilson-Lauritz, Centaurenliefde, in: Het begint met nee zeggen, Schorer Boeken (p. 191-214)
http://geschiedenis.vpro.nl/programmas/3299530/afleveringen/5950244/items/7183843 (VPRO radio broadcast, May 26, 2002: 'Spoor terug, Castrum Peregrini')
www.hko97.nl/Archief/artikelen/eerde%20en%20pallandt.htm (Harry Woertink about Ommen)
Exposition and catalogue Max Beckmann in Amsterdam 1937-1947 (Van Gogh Museum, April 2007)
Exposition about Gisèle van Waterschoot van der Gracht, NIOD (April-October 2007) and a seminar on 12 October 2007
www.fraenger.net (Frommel, Gothein)
The Encyclopedia of Righteous among the Nations, Rescuers of Jews during the Holocaust. The Netherlands. Yad Vashem, Jerusalem 2004 (Frommel)
www.filmtotaal.nl (an armed resistance group)
www.ogs.nl (Weijand, Castle Eerde)
www.onsamsterdam.nl (picture Van Waterschoot van der Gracht, 2003)
Castrum Peregrini is not mentioned in the Dutch Homo Encyclopaedia.
Jewish Gays and Lesbians
L. Ali Cohen (1895-1970). Levi Ali Cohen was a lawyer who played an important role in the cultural life of the city of Haarlem. He regularly published in magazines and wrote several books. Martinus Nijhoff reviewed his collection of poems Reflexes (1925) and his short story Eros in Reykjavik (1931); he summarized the latter and gave his opinion. Homo-eroticism plays an important role in it.
Cover 'Eros in Reykjavik' (Source: www.antiqbook.nl/boox/fok/18036.shtml)
Three ships are in the harbour of Reykjavik: a Norwegian vessel, a Danish warship and the ‘Eros’, a slender, white ship that came from Scotland under a far-away, unknown flag, with people who sailed all over the world. Aboard the latter (the Eros turns out to be a tourist ship) a night-long party is organized, to which girls from the shore and the crews of the other ships are invited. During one night of exceptional awareness, under the pressure of a heightened state of mind, several of the party-goers arrive at a more accurate evaluation of their own personality. Of course the next morning depression sets in. Some of the tourists go on a trip inland by car with the Icelandic girls to visit a spring that produces water only once a day. But a Danish naval cadet, who during the night experienced a revelation about his own nature, tries to wash off what to him seems to lie in between a personal secret and a stain; he dives into the ice-cold seawater in the harbour, gets cramps, and drowns’... In part because of Ali Cohen's florid style, his book does not touch us any deeper than with a sweet bitterness. Just like the galosh-wearer in the fairy tale by Andersen, we are forced to enter all the hearts too deliberately, one by one; with too much emphasis we are made aware of all these profound motives and tender feelings."
Ali Cohen survived the war and the Holocaust. After the war, the COC magazine Vriendschap published a short fragment from the book cited above (August 1951).
M. Nijhoff, Kroniek der Nederlandse Letteren III (p. 711-712, 714)
M. Nijhoff, Reflexen (Collected Work); Vriendschap, magazine of the COC (August 1951), p. 7 (www.ihlia.nl/documents/pdflib/Vriendschap/1951/1951-08.pdf)
Jacobus Cohen (Heinenoord, 13 June 1877). Jacobus was a baker and lived in the Blasiusstraat 47. He was married to Johanna Goudsmit (Wijk bij Duurstede, 5 September 1873) and they had four adult children: Mozes Nathan (1909), Israel (1910), Nathan (1913) and Sophia (1915). The children were born in Rotterdam, where the family had lived for some time. Cohen, who had a typically Jewish name and a 'J' in his identity papers, was arrested in September 1941 by the Amsterdam vice squad. At that time he was 64. The police handed him over to the German authorities, who detained him. Jacobus was sent to Westerbork and was gassed on 1 October 1942 in Auschwitz. His son Mozes died in the same camp on 30 April 1943. His wife and his son Nathan were gassed on 21 May 1943 in Sobibor. His daughter Sophia was murdered the same year, on 30 November, in Majdanek. Israel Cohen succumbed before 31 March in a labour camp in Central Europe.
Jacob Hiegentlich (1907-1940).
The Jewish Monument writes about him: ‘Jacob Hiegentlich was born April 30, 1907 in Roermond. His parents - the garment wholesaler Sallie Hiegentlich and Rosalie Egger, who died in 1927 - had five children. Four of them, like the father, would not survive the war.’
‘Jacob Hiegentlich grew up in Catholic Roermond, in what he himself described as a "confusing mix of Roman Catholic and Jewish events". He attended the Bishops College of Roermond but, because of problems with mathematics, did not finish his education at this high school.’ In 1923 his debut collection of poems in German, Die rote Nacht (The Red Night), was published. ‘At the age of 17, under the pseudonym of David Jozua de Castro, he wrote 'Het zotte vleesch' (The foolish flesh), a novel about the people of Limburg. In it the general practitioner Laurent Stijn, a friend of Hiegentlich's father, was depicted in a very unflattering way. Father Hiegentlich then bought up the entire edition.’
Jacob Hiegentlich, oil painting by Jules Rummens, ca. 1925
(44 x 33,5 cm, collection Ser J.L. Prop, Banholt) (Source: www.dbnl.org)
‘At the urging of his father he went to Amsterdam to study for the diploma of Dutch language teacher, which he obtained on the 17th of November 1930. In Amsterdam he was an active member of the Dutch Zionist Students Organisation (NZSO). He lived among circles of artists and Bohemians and he belonged to the ‘Reynders circle’, named after the famous café at the Leidseplein.’
‘In 1932 he became a teacher at the Theosophical high school ‘Drafna’ in Naarden. The classroom-based educational system was contrary to his own strong feelings of individualism. From 1935 on he devoted himself solely to his literary work.’ He published poetry in the literary magazine the Nieuwe Gids. In 1937 his novel Onbewoonbare wereld (Uninhabitable World) was published, and in 1938 Schipbreuk te Luik (Shipwreck at Liege). The novel Met de stroom mee (With the Flow) was published posthumously in 1946. Jacob was gay and his stories and novels show a Freudian involvement with subjects like sexuality and death.
‘Jacob Hiegentlich was a fervent supporter of zionism. Within zionism he chose the extreme and militant school of Revisionism under the leadership of Jabotinsky. He wrote numerous articles in general Jewish and zionist magazines, like Baderech, Hatikwah (the official magazine of the NZSO) and Ha’Ischa. Especially for the Joodsche Wachter (the Jewish Sentinel, official magazine of the Dutch Zionist Union) he wrote political articles against the growing national socialist movement. He gave lectures on literature and Judaism and wrote several reviews.’ Not surprisingly, he was a great admirer of Jacob Israël de Haan, who was also gay and a zionist, as well as of De Haan's sister Carry van Bruggen.
‘On the evening of 14 May 1940, Jacob Hiegentlich took poison. He was admitted unconscious to the Wilhelminagasthuis in Amsterdam, where he died on Saturday the 18th of May 1940, 33 years old. On the front of his parental home at the Markt 27 in Roermond a commemorative plaque was placed for Jacob Hiegentlich, ‘Roermond's writer’.’
Siegfried E. van Praag wrote about him: ‘Whenever I'm in Amsterdam, I miss him. He can no longer be found in the Reijnder’s café on the Leidsche Plein, no longer in his room in an old canal house. He no longer pours Bols [Dutch gin - ed.] in his landlady's glass while he plays a typical, little known record on his gramophone. And he no longer curls the lips in his eternal boyish face around strong cigars ... Yes, Hiegentlich was dressed flamboyantly, because he was inclined towards dandyism, the need for the chic appearance of an old-style boulevardier. Despite his inner feminine nature, and probably to compensate for it, he loved bravado. His bravado and his conscious zionism did not allow him any camouflage of his personality and Jewishness.’
Digitaal Monument Joodse Gemeenschap (www.joodsmonument.nl)
Marina van der Klein, De Homo Commemorans en de bezetting: kanttekeningen bij een dominant discours (www.vertrouwen.nu/reactie_MarianvdKl.htm)
G.J. van Bork, Jacob Hiegentlich (www.dbnl.org/auteurs/auteur.php?id=Hieg001)
Samuel Hoepelman (Amsterdam, 28 June 1896). Samuel was an office clerk and unmarried. He lived together with his elderly parents, Jacob Hoepelman (Amsterdam 1863) and Alida Hoepelman-Suis (same), at the Valckenierstraat 35-I. On 26 August 1942 he was arrested by the vice squad, on the same day as Isaäc Walvisch (see below).
Report on Samuel Hoepelman (Source: NIOD)
Van Opijnen reported to the Bureau of Jewish Affairs that Hoepelman 'on several occasions committed a sexual offence with Aryan boys, and still frequents public conveniences to seduce them to sexual offences'. On this basis the Bureau (a certain P.K.) determined that this Jew was a dangerous homosexual and a 'Volksschädling', a harmful element. He was to be permanently removed from society. On the day of his arrest Samuel Hoepelman was handed over to the Sicherheitsdienst. In December 1942 he was sent to camp Westerbork and from there deported to Sobibor on 20 April 1943. There Samuel Hoepelman was gassed on 23 April, 46 years old. A few weeks earlier his parents had been murdered in the same camp, on 26 March 1943. A brother or sister of Samuel survived the Holocaust.
- NIOD, dossier Bureau Joodsche Zaken Amsterdam (5225/7313), with thanks to Erik Schaap, June 2010
24 September 1886 ~ 13 November 1942
Salomon Lam (Amsterdam, 24 September 1886). Salomon was a travelling salesman and unmarried. In 1941, according to www.joodsmonument.nl, he lived together with the young couple Springer and the family Sluijzer on the fourth floor of the Sarphatistraat 195.
His parents were Levie Lam and Sara Vleeschdrager. According to the Amsterdam police report of 1942 his address was Nieuwe Achtergracht 107, ground floor, the house of the aged couple Emanuel and Mirjam Mossel-Mulder (www.joodsmonument.nl).
On 26 August 1942 he was arrested by the vice squad, on the same day as Samuel Hoepelman and Isaäc Walvisch (see elsewhere). It was 'Kriminalbeamte' and head of the vice squad Jasper van Opijnen who arrested him.
Van Opijnen reported to the Bureau of Jewish Affairs, in similar words as he did with Hoepelman, that Lam seduced 'Aryan' boys to sexual offences. The Bureau determined that this Jew was a dangerous homosexual and had to be permanently banned from society. Colleague Kaper from the Bureau also knew Salomon was a communist.
That word was underlined. Salomon Lam - Jew, gay and communist - was handed over to the Sicherheitsdienst the same day and deported via camp Westerbork to Auschwitz. There he died on 13 November 1942, 56 years old.
- NIOD, dossier Bureau Joodsche Zaken Amsterdam (5225/7317), with thanks to Erik Schaap, June 2010
Isaäc Metzelaar (Amsterdam, 22 March 1874). In 1941 Metzelaar lived at the Amstellaan 82-II, a street called Stalinlaan after the war and since 1956 Vrijheidslaan. He was married to Hendel Limkowksi (Chrzanow, Poland, 27 December 1876); her address in 1941 was Tweede Boerhaavestraat 77-I, where two more women her age lived. The couple were probably separated. Isaäc was arrested in July 1942 by the head of the vice squad, Jasper van Opijnen, on grounds of prohibited homosexual behaviour. From July (the 15th) the first trains left Amsterdam for Westerbork. Metzelaar was part of one of these transports – just like Jacobus Cohen. In Westerbork he belonged to the group of prisoners who were deported to Auschwitz on 24 July. Isaäc Metzelaar died there on 19 August 1942, at the age of 68. His wife died in the same camp on 14 January 1943.
Mina Sluyter (Amsterdam, 31 May 1916). Her name is known because of an annotation of the Bureau of Jewish Affairs on 24 July 1942: 'taken into custody because of homosexuality 24-7-1942, also Jewish, moved to the Sicherheitsdienst'. During the months of July and August 1942, according to the letter by Van Opijnen, more Jewish homosexuals were arrested. Mina Sluyter had visited an 'Aryan woman' with whom she supposedly had a lesbian relationship. Mina was a seamstress and lived in 1942 at the Kerkstraat 378-II. She died two months after her arrest, on 30 September, in Auschwitz.
- NIOD, dossier Bureau Joodsche Zaken Amsterdam, with thanks to Erik Schaap, June 2010;
- Sytze van der Zee, Vogelvrij, De jacht op de Joodse onderduiker (Amsterdam 2010), p. 123
- www.joodsmonument.nl (with the name Sluijter)
Isaäc Walvisch (Amsterdam, 21 September 1888). In 1941, Walvisch, a merchant, lived at the Amstellaan 27-II, together with two other families. In 1942 his address was Kromme Mijdrechtstraat 6-II. On 26 August 1942 Walvisch was arrested by the vice squad, on the same day as Samuel Hoepelman (see above). He was detained only briefly and was sent to Westerbork on 10 September and the next day to Auschwitz. On 14 September 1942 Isaäc Walvisch was gassed.
David Waterman was also one of the Jewish men who were arrested and handed over to the Sicherheitsdienst. He might have been the same David Waterman who was born in 1893 in Amsterdam into a family of 12 children; this David was married and had two daughters who married non-Jews and survived the Shoah. On 25 May 1943 Waterman ended up in Westerbork, possibly together with some family members. Fifteen people with the surname Waterman were gassed in Sobibor three days later, on 28 May 1943, but David was allowed to stay in the camp, in barrack 65, and managed to avoid deportation. He lived to see the liberation of camp Westerbork on 12 April 1945. Waterman did not wait for permission to leave the camp and disappeared after a couple of days.
He had to be careful because the vice squad of Van Opijnen was still active until 1946 and the German regulation 81/40 remained until 1947. Finally in the sixties the social climate changed. However, it was not until 1986 that gays could apply for compensation for suffering during the war.
Hugo van Win (1920-2004). Hugo was the second child of the Amsterdam Jewish couple Salomon van Win and Elisabeth de Metz. During the thirties the family lived in the Den Texstraat. At that time he attended the Regulierschool, together with Benno Premsela, who later became an interior designer and a co-founder of the COC. During those years Salomon van Win started making cans which he filled with menthol liquorice, which became a flourishing business. Hugo also worked there for a short while after finishing high school, and had his first job at a ladies' fashion store, which was about to send him to the Netherlands Indies. Meanwhile, the war had already started.
Hugo’s parents were non-practicing Jews, but well aware of the dangers of their origin. In 1940 and 1941 his father, under an assumed name, had already rented living accommodations in case they had to go into hiding. It required a lot of money. In December 1942 Salomon van Win bought forged identity cards for the family, at 200 guilders each.
Before the war Hugo had already discovered, in the street, that he was gay. Everywhere in Amsterdam were ‘krullen’ (curls), public urinals. There he experienced the tension of erotic contact with men. ‘Cruising’ was also done in parks or fields. After that he came into contact with men of his own age who had always believed they were ‘the only gay in the world’. The boys sometimes dressed up as girls, used make-up and once went to De Bijenkorf (Amsterdam's most famous department store) to stage a demonstration. Hugo also visited the gay bar 'The Marathon' at the back of the Tuschinsky Cinema in the Reguliersbreestraat.
Homosexual contact by an adult (over 21) with a minor had been an offence since 1911. Therefore the police checked the 'curls' and regularly raided the gay bars. Boys under the age of 21 were taken to the police station where their parents could pick them up. The German laws, in force as of August 1940, made all homosexual contact punishable and raised the penalty from a maximum of four to ten years.
In November 1943 the SS magazine Storm wrote that gays needed to be rooted out 'to the last man, as weeds in the Dutch garden' (http://home.wanadoo.nl/gckool/a06.html). The magazine also published some articles on the gay bars which still continued their business. Under a picture of a little pub with the name 'Blonde Saar' they inserted the text: ‘Something that damages a healthy nation should be cut out’. The text in the small frame explains that these places ruined the nation's health (see http://www.annefrankguide.com/nl-NL/bronnenbank.asp?aid=9086).
SS-magazine Storm, late 1943 (annefrankguide.com)
On many occasions Hugo had a lucky escape. In October 1941 he became an adult. He relates that in the room near the Concertgebouw, which his father rented for him from two ladies from the Netherlands Indies, he had his first experience of trying to make love. The boys kissed each other and that was it.
In June 1942 Hugo left Amsterdam to avoid forced labour and deportation, and with the help of his mother's relatives began work in a Jewish mental hospital, The Apeldoornsche Bos (The Apeldoorn Forest). Later that year the trainee nurse received, through his father's secretary, a forged identity card and the key to a hiding place in Apeldoorn, just in case. On the 20th and 21st of January 1943 the mental hospital, where Jews from Apeldoorn had also been rounded up, was prepared for evacuation and deportation. On the night of the 20th Hugo managed to escape to the hiding place and the next morning he managed to get on the train to Amsterdam. On the 22nd of January about 800 residents and 50 staff members were transported to Westerbork. Most of them were killed in the camps (see Abigael Santcroos*, Netherlands Antilles).
Luckily Hugo could return to his room at the Alexander Boersstraat, where his mother and sister soon joined him. Three other persons in hiding followed, and finally also Hugo's brother with a friend: ten people in a house which officially could accommodate only two. Hugo decided to leave. After a failed attempt he received a tip to contact a member of the Employment Centre in Hengelo, a Mr Maurits Staudt. Staudt was able to provide a forged passport (in the name of Bertus de Witte) and other documents, which enabled Jews and others who wanted to go into hiding to join the employed workers in Germany. Hugo decided to go for this seemingly crazy proposal. His father and the others in hiding in Amsterdam were furious, but after all kinds of delays, on the 17th of August 1943, Hugo left for the lion's den.
On the papers Staudt had entered the German city of Balingen, far from the violence of war and near Switzerland. Van Win actually managed to get a job there, with a department of the Siemens company. He forged Dutch certificates and proof of ‘Aryan descent’ to make himself look like the person in his passport. Hugo van Win worked in the financial department, gained trust, met some reliable Germans and made a career. He listened to Radio Oranje and secretly resisted by manipulating the accounting records. Escaping to Switzerland seemed impossible. Instead, on the 1st of July 1944, his boss sent him to heavily bombarded Berlin. Hugo could not refuse the ‘promotion’, but insisted on permission to look for a place to live on his own, just like in Balingen.
His boss in Berlin thought a labour camp was good enough for the novice. Hugo then went to the Gestapo and with some luck was given permission to rent a private room. He found one in the Sesenheimerstrasse, Charlottenburg. Van Win lived through over 300 bombardments. On October 6, 1944, he survived a direct hit on the air-raid shelter. During this insane period only one thing counted: staying alive. But also, just like so many young people in Berlin, he wanted to experience everything that life still had to offer. Every night Hugo visited a gay bar which, despite the severe nazi punishments, was still in business. One of the bars was 'Barth' in the Fasanenstrasse. He also visited the ‘curls’ in Berlin, public urinals near underground and railway stations. Because Hugo had an apartment of his own, he could take men home with him unhindered.
Hugo took part in wild gay parties while the bombings went on. One of his most exceptional encounters was with a boy who was known as ‘Klosetta’. He was always flamboyantly dressed and often appeared as a woman. When the Russians were already in Berlin they had a conversation, and Klosetta turned out to be Israel Cohen. He had chosen his striking disguise to save himself and his mother. In Berlin Hugo became involved in the resistance. On Wednesday nights, in café Bristol at the Kurfürstendamm, he met a certain Klaus, who needed secret lists and drawings from Siemens. This probably had something to do with the production of V-weapons.
On the 5th of May 1945, Hugo heard about the liberation of the Netherlands on a radio rigged up by Delft students. With five students he decided to get back to Holland by bicycle as soon as possible. That was very optimistic. They ended up in a Russian camp and were handed over to the Americans, who transported them to Wolfsburg. From there they arrived by freight train in Eindhoven on May 29. In Amsterdam Hugo’s parents, his brother and his sister also turned out to have survived the war. But their house at the Den Texstraat was occupied by an NSB couple. The husband had fled to Germany but the woman still lived there. After a lot of pressure she eventually left, taking all the furnishings with her.
Hugo van Win (about 1960) (www.coc.nl/dopage.pl?thema=any&pagina=viewartikel&artikel_id=149)
On Saturday the 7th of December 1946 the Amsterdam Shakespeare Club organised its first meeting in café De La Paix (Leidsestraat). Jef Last gave a lecture on ‘Liefde in Griekenland’ (Love in Greece). Hugo van Win became a member of the club, which was formed from the readers' contacts of the pre-war gay magazine Levensrecht (see Niek Engelschman). In 1948 the name of the club became ‘Cultuur en Ontspannings Centrum (COC)’ (Culture and Relaxation Centre). Between 1952 and 1956 Hugo van Win was the treasurer of the COC, which later made him an honorary member.
Hugo van Win became a successful businessman, first in textiles, later as a successor to his father in the Liquorice Factory and in the Import Factory 'The Atlas'. He died on the 22nd of May 2004.
Exposition ‘Who can I still trust?’ and the exposition newspaper
Hugo van Win, Een Jood in nazi-Berlijn. Utrecht 1997
Poster from the exposition (Source: www.westerbork.nl)
Exposition 'Who can I still trust?'
The exposition 'Who can I still trust? Gay in nazi Germany and the occupied Netherlands' ran from 22 September 2006 until 14 January 2007 in the Resistance Museum in Amsterdam. For more information about the exposition and its tour see www.vertrouwen.nu
Inspiration for The English Patient had gay Nazi lover
The Second World War spy who inspired the womanising hero in the Oscar-winning film The English Patient was actually homosexual and in love with a young soldier, according to letters discovered in Germany.
By Allan Hall in Berlin
10:18PM BST 05 Apr 2010
Intimate correspondence penned by Hungarian-born adventurer Count Laszlo de Almásy shows he had a relationship with a soldier called Hans Entholt.
The Heinrich Barth Institute for African Studies in Cologne has made the claim after discovering love letters but has yet to publish the details.
A member of the institute's staff told Germany's Der Spiegel magazine the letters show he had several homosexual relationships: "Egyptian princes were among Almásy's lovers."
Entholt, an officer in the Wehrmacht, died during Rommel's retreat from Africa after stepping on one of his own side's landmines.
The letters also reveal Almásy did not die of a morphine overdose after suffering terrible burns, the fate that befell the fictional hero played by Ralph Fiennes in the film. Instead Almásy succumbed to amoebic dysentery in 1951.
Born the son of a Hungarian nobleman, Almásy is portrayed in the film as the handsome young lover of an Englishwoman in pre-war Cairo.
During the war he smuggled Nazi agents through the Sahara desert as part of his missions for the Brandenburg Division, a unit of German foreign military intelligence that carried out acts of sabotage behind enemy lines.
Almásy was one of a number of minor pre-war explorers recruited by German intelligence in a bid to diminish British influence across Africa.
Gay Artist Raised in Nazi Germany Arrives Here
August 26, 2005|By Richard Knight Jr., Special to the Tribune
It's a long way from the threat of Nazi Germany's notorious persecution of homosexuals to a solo show celebrating homoerotic imagery in Chicago's Boystown. But such has been the long day's journey into art and personal acceptance of German artist Hans-Ulrich Buchwald. Not only did a Buchwald exhibit of his linoleum prints and woodblocks open at the North Side Leigh Gallery on Aug. 19, but a concurrent show, a retrospective of his entire career, opened at Pilsen's Colby Gallery the following night.
The timing of the two shows was purely coincidental but hardly surprising, say both gallery owners, Jean Leigh and Colby Luckenbill. "I think the more people that see his art the better," Luckenbill said. "What's really important here is this man who's a pioneer of promoting our common humanity through the arts." Leigh said, "I just think his work is very inviting."
The 79-year-old Buchwald, who traveled from his home in Hanover, Germany to attend both openings, was accompanied by his translator and artistic contemporary, Rolf Peter Post, and his daughter, Mariana Buchwald, who lives in Chicago and was responsible in a roundabout way for the multiple exhibitions. She'd befriended both Luckenbill and Leigh and wanted to raise the visibility of her father's work here, though she didn't expect to see two shows dedicated to his work.
"I'm so thrilled for him," Mariana said before the openings. "His work has inspired many people and he does, too, because he's very joyful. He doesn't speak English so it was a bit scary for him coming to America but it's an exciting experience. He said, `Let's go for it.'"
"I am very excited and pleased to have my work shown in Chicago," said Buchwald, who spoke from his studio in Hanover through Post, said. Though Buchwald has shown extensively in his native Germany and Italy, and was part of a 1974 group show in New York, this is his Chicago debut, his first solo exhibition and his first retrospective in America.
Buchwald's professional recognition has been mirrored by the acceptance of his homosexuality from his daughter Mariana. "After my mother passed away in 2000 I encouraged him to come out," she said. "The great exhibition of his work at the Gay Museum in Berlin last year helped him to be confident about his identity."
But this has been a hard won confidence for Buchwald who learned at a young age to remain circumspect about his sexuality. Born in 1925, Buchwald was raised in a family of artists who encouraged all manner of expression. But the suicide of an uncle who was blackmailed with the threat of exposure by a male lover haunted Buchwald.
At age 15 Buchwald enrolled at the Breslau School of Handicrafts to study commercial art, but two years later, in 1942, he was drafted into the German Army. Captured by the Americans in 1945, he had his first homosexual encounter with another prisoner. But though one of his American captors provided him with paints, seeing his artistic talent, Buchwald rejected the officer's advances.
After the war, Buchwald found work as a painter of scenery and mask-maker for a theatrical company and returned to his artistic studies at the Hanover School of Fine Arts. There he met and fell in love with another artist, Hella Feyerabend, who was 10 years his senior. They married in 1956 and had two daughters, Mariana and Luise, in addition to taking in Feyerabend's daughter from a previous marriage, Gundel.
The two shared an artistic bond that released Buchwald's talent--and rose above his repressed desires for men. "It was continuously there," Mariana Buchwald said. "But the artistic relationship between my parents kept everything going. They were so driven that their muse kept them above everything personal." By the early '60s the couple established a pattern of summer painting trips in Europe and North Africa and joint exhibitions that persisted until Feyerabend's death.
Lesbian Love Story from Nazi Germany
True tale inspires book, documentaries, and feature film
June 22, 1999 | Jeanne Carstensen, SF Gate
There are very few stories about lesbians in Nazi Germany.
We know that lesbians weren't specifically targeted by the Third Reich as were male homosexuals, but Nazi policies made any lifestyle other than heterosexuality extremely difficult.
Erica Fischer's book "Aimée and Jaguar: A Love Story, Berlin 1943" is probably the most well-documented account of lesbian life during this period in existence; it's certainly the most well-known.
Since its publication in Germany in 1994, "Aimée and Jaguar" has been translated into many languages (a Croatian version is in the works) and has been the subject of no fewer than three radio and two television documentaries. It has inspired art exhibits, songs, and a play, and in part, a recent San Francisco symposium on "Lesbians and the Holocaust."
And now the book is a major motion picture. "Aimée and Jaguar," by German director Max Färberböck and starring Germany's Maria Schrader, opened this year's Berlin International Film Festival, the first time a lesbian-themed film has had that honor.
Juliane Kohler as the red-haired Lilly Wust, and Maria Schrader as the dapper Felice Schragenheim, in "Aimée and Jaguar."
In a recent e-mail exchange, Erica Fischer, a Jewish-Austrian journalist living in Berlin, spoke about her initial surprise over the book's tremendous success. "I'm a feminist," she wrote, "so I didn't expect to reach a large audience."
"But I think the combination of love and the holocaust, i.e., love and danger or love and death, is certainly fascinating to people," she continued, "and the fact that it's a true story."
Felice Schragenheim, aka Jaguar, is a vivacious 20-year-old Jewish lesbian trying to survive in wartime Berlin by living under a false identity. A budding journalist, at one point she audaciously takes a job at a Nazi newspaper to gather information for the underground. Despite the constant fear of deportation, she moves about Berlin with unsquelchable chutzpah and lesbian panache.
Lilly Wust, aka Aimée, is a gentile, a mother, and the 28-year-old wife of a Nazi officer stationed outside of Berlin. She lives comfortably in an ample apartment, complete with a bronze relief of the Führer, and is supplied with extra rations for herself and her four sons by the regime.
Felice meets Lilly through her lover Inge, who as an unmarried woman is fulfilling her compulsory domestic service as a maid in Lilly's household. Inge is part of a dynamic circle of Berlin lesbians, and Lilly soon finds herself playing hostess to the charming women in her apartment.
Still oblivious to the fact that Felice and most of her friends are Jews, she succumbs to Felice's flirtations and falls madly in love. In letters and poems, they now refer to each other sweetly as Aimée and Jaguar.
Some months into their relationship, after Felice has moved into the Wust household, Lilly finally insists that Felice explain her mysterious comings and goings. As Goebbels is intensifying his program to eliminate all Jews from Berlin, Felice risks all and reveals her identity: She is Felice Schragenheim, not Schrader as Lilly had believed.
Shocked, Lilly pulls herself together and takes Felice into her arms. Although Inge and others had overheard her make anti-Semitic statements in the past, Lilly suddenly understands the danger Felice is in and swears to keep her out of harm's way. Felice's chances to emigrate slowly evaporate. Most of her friends and relatives are either deported, or manage to escape, but Felice stays behind in the dubious safety of Lilly's apartment.
On August 21, 1944, exhausted from months of bombings, the lovers escape Berlin on their bicycles for a day in the countryside. They frolic and kiss and take self-timer photographs with Felice's beloved Leica camera. Upon return to Lilly's apartment, the inevitable climax finally occurs: The Gestapo is waiting, and takes Felice away.
Lilly Wust is now 85 and still living in Berlin. Fischer began interviewing her in 1991, after she won a German award for helping Jews during the Nazi period. For several years, Fischer pored over the letters, poems, journals and photographs that Lilly had carefully saved in two old suitcases; she also interviewed Inge and many others who knew Felice and Lilly at that time, and gained access to some of their documents from the period.
Fischer used this material to create complex, intimate portraits of her main characters. As the love story unfolds, for example, we learn through letters and interviews what Inge and some of her friends thought of Lilly. Their impression that she behaved at times like a convinced Nazi contrasts sharply with Lilly's portrayal of herself as ignorant of politics.
Fischer also employed this rich source material to capture the humorous side of daily life in Nazi Berlin, as in this conversation from the book between Felice and Inge, who at that time was employed as Lilly's (Elisabeth) maid. Felice, it appears, was an extremely creative flirt:
At the end of October Inge returned home from work furious.
"Damn, she's one of them after all! Do you know what she said to me today? 'Jews? I can smell them!' I can't take it any longer."
"Oh yeah? She said she can smell Jews, did she? I'd like to test that out!"
Felice, her interest already piqued by Inge's colorful descriptions of her work as a housemaid, and craving a change in her monotonous and static life, could not let go of the thought of having Elisabeth Wust take a whiff of her.
"I tried to write my book lightly," Fisher said. "I wanted to convey the permanent danger in which Felice was living, but also the comical sides of her life and the relationship of the two women."
"I suppose this ... was easier for me because I am Jewish and hence without a guilt complex," Fisher continued in her e-mail, "at least not the same one the Germans have!"
Fischer's relationship with Lilly was complex, tense at times. Lilly would talk openly about her lesbian sexuality and life after 1943, when she met Felice, but was not forthcoming about her decade of marriage to her Nazi husband. "Lilly certainly wasn't an ardent anti-semite. She was just one of millions of fellow-travellers and opportunists who went along with the regime because they could not think of anything else," Fischer explained.
There's no doubt that after Lilly learned Felice was Jewish, she changed completely. She helped save three Jewish women in Berlin after Felice was captured, and in her loss came to identify so closely with Jews that she sent her children to synagogue.
But Fischer is adamant about the need for Germans like Lilly to break out of a "disruptive culture of silence." "Instead of coming to terms with their own family histories, many Germans would love to belong to the victim group. This I cannot accept."
Now that the story of "Aimée and Jaguar" has reached an international audience, 85-year-old Lilly Wust has become a bit of a celebrity, especially among German lesbians. At a recent symposium on "Lesbians and the Holocaust" organized by the Holocaust Center of Northern California, the mostly lesbian audience was charmed when Fischer told the story of young lesbians competing for Lilly's attentions. "Some of them are 50 years her junior!" Fischer said with a laugh.
But Fischer sees contradictions in this adulation from the mostly non-Jewish German lesbian community. "They tend to forget the dark side of Lilly's history and only see her love for Felice," Fischer writes. "They see her as Lilly would like to have seen herself. Many prefer to see the two women as equal victims of Nazi Germany."
"But I insist on stressing that Felice died as a Jew and not a lesbian."
The power of Erica Fischer's book and the issues it raises have contributed to the initiation of a specifically lesbian reading of the holocaust, the complexities of which were discussed at the "Lesbians and the Holocaust" symposium.
While in San Francisco for the event, Fischer had an extremely moving and unexpected encounter. A holocaust survivor living in the city saw the publicity for the symposium and realized that she had known Felice Schragenheim in the concentration camps. She contacted the Holocaust Center, and a meeting with the journalist was arranged.
"The woman, 82 years old and very beautiful (I won't mention her name because I do not know whether she would want me to), came to fill in a few gaps in my knowledge about Felice's last months," Fischer began. "But mainly, I think, she wanted Lilly to know."
"We knew that Felice was in Gross Rosen, but Gross Rosen consisted of approximately 100 side camps, and we didn't know in which of them Felice was. Now I know: it was a place called Kurzbach."
"Now I also know what Felice was forced to do: She had to carry logs of trees and dig tank traps, but all the women could do was scrape the ground because it was deep winter and very cold."
"I know that Felice was brought to Bergen-Belsen in an open freight train in February 1945. It took them a week to get there -- in an open freight train. And I know that Felice died in Bergen-Belsen, probably shortly before the camp was liberated."
The woman also confirmed some of Fischer's hunches about Felice's nature.
"In Theresienstadt Felice was always the heart and the soul of the party telling funny stories whenever she could, and that she loved surrounding herself with pretty girls, at times sitting on her knees -- which confirms my suspicion that after the war Felice would not have stayed for a long time with clinging Lilly."
Fischer expressed this suspicion in the epilogue to the book: "It made quite a few of my readers furious!"
"The illusion of never-ending love is still very much alive." | http://www.fold3.com/page/285875837_nazi_persecution_of_homosexuals/ | 13 |
Personal Morals vs. Political Moves
Author Info: R.J. Long
Quaker Valley Middle School
201 Graham Street
Sewickley, PA 15143
Type of Lesson
This unit offers great interdisciplinary opportunities between History and English / Language Arts classes at the middle school level. Teachers of both subjects could collaborate to effectively teach reading & writing across the curriculum.
To most Americans, Thomas Jefferson is most well known for writing the Declaration of Independence. In the Declaration, Jefferson’s most famous phrase is: “We hold these truths to be self-evident, that all men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty and the pursuit of Happiness.” Although this phrase is admired by Americans and nations around the world, it also becomes the source of much controversy – because Jefferson himself owned 600 slaves throughout his lifetime.
Middle school students must learn to think critically about the complexities of Thomas Jefferson's life and decisions. The best way to accomplish this is to use Jefferson's own words, in his own writings. By utilizing primary sources such as letters, correspondence, & autobiographical sketches, the students will gain a deeper understanding of why the slavery dilemma was not so simple to fix.
- Declaration of Independence: meaning of the document & understanding of the ideals set forth in it
- Thomas Jefferson: his role in writing the Declaration of Independence, and the fact that he lived on a plantation and owned slaves
- Slavery in America, 1775 - 1825: why slavery existed in the U.S., how it impacted U.S. government, society, geographic regions, and the people held as slaves
English Language Arts Standards » History/Social Studies » Grades 6-8
Key Ideas and Details
- RH.6-8.1. Cite specific textual evidence to support analysis of primary and secondary sources.
- RH.6-8.2. Determine the central ideas or information of a primary or secondary source; provide an accurate summary of the source distinct from prior knowledge or opinions.
- RH.6-8.4. Determine the meaning of words and phrases as they are used in a text, including vocabulary specific to domains related to history/social studies.
- RH.6-8.5. Describe how a text presents information (e.g., sequentially, comparatively, causally).
- RH.6-8.6. Identify aspects of a text that reveal an author’s point of view or purpose (e.g., loaded language, inclusion or avoidance of particular facts).
- For students to be able to effectively read, analyze, and interpret primary sources
- For students to create a clear, definitive argument based on what they learn from the primary sources
- For students to write an essay in which they prove their arguments with specific evidence cited from the primary sources.
Day #1: 40 minutes in class, 10 minutes of homework
- Anticipatory Set (10 minutes): Pose these questions to the students: What is a hypocrite? What does it mean to be a hypocrite? Who is someone that you think is a hypocrite? What do hypocrites do? (5 minutes of independent student writing as they brainstorm responses to the questions. After the students have had 5 minutes to write down their responses, ask the students to share their views for 5 minutes of discussion)
- Introduction of Task (10 minutes): Pass out Document Based Essay Packets & discuss with the students the Directions, Historical Background, and Your Task sections of the assessment on the first page of the packet. Depending on various student levels of knowledge, you may need to answer any questions that the students may have before you move to the next activity.
- Group Analysis of Document #1 (20 minutes):
- Have a student read Document #1 out loud & everyone in class follows along
- After the document has been read, the teacher should guide the students through the process of responding to the analysis questions. My advice is to go through this process as a group and model strategies to effectively answer the questions (go back into the document, highlight specific words & phrases, put the phrases of the document into the students own words), line by line and word by word if necessary. Model for the group of students how to conduct a very close reading & analysis of the primary source text.
- Homework: Students should read Document #2 and answer the two analysis questions for Document #2. Students should bring them to class tomorrow completed & should be prepared to share their responses and/or ask any questions that they may have.
Day #2: 40 minutes in class, 10 minutes of homework
- Anticipatory Set (10 minutes): Check to see that all students have completed the analysis questions for Document #2. Briefly discuss the responses & thoughts that the students had regarding Document #2. Answer any questions/points of confusion students may have had about Document #2.
- Independent Analysis of Documents #3, #4, & #5 (30 minutes): Students should work independently to read & answer the analysis questions for Documents #3, #4, & #5. Teacher should circulate around the room checking to see if students have any questions, and that the students are using the strategies that were modeled for them the day before when the group analyzed Document #1 and reinforce those strategies if necessary.
- Homework: Students should read Document #6 and answer the three analysis questions for Document #6. Students should bring them to class tomorrow completed & be prepared to share their responses and/or ask any questions that they may have on any of the documents (specifically Documents #3, #4, #5, & #6).
Day #3: 40 minutes in class, 10 minutes of homework
- Anticipatory Set (10 minutes): Check to see that all students have completed the analysis questions for Document #6. Briefly discuss the responses & thoughts that the students had regarding Document #6 or any of the documents. Ask students to share their responses to some of the analysis questions to create an open dialogue about differences & similarities that may exist among students in the class based on their interpretation and analysis of the documents.
- Independent Analysis of Documents #7, #8, & #9 (30 minutes): Students should work independently to read & answer the analysis questions for Documents #7, #8, & #9. Teacher should circulate around the room checking to see if students have any questions, and that the students are using the strategies that were modeled for them.
- Homework: Students should read Document #10 and answer the three analysis questions for Document #10. Students should bring them to class tomorrow completed & be prepared to share their responses and/or ask any questions that they may have on any of the documents.
Day #4: 40 minutes in class, 15-20 minutes of homework
- Anticipatory Set (10 minutes): Check to see that all students have completed the analysis questions for all 10 of the documents. Briefly discuss the responses & thoughts that the students had regarding Document #10 or any of the documents. Ask students to share their responses to some of the analysis questions to create an open dialogue about differences & similarities that may exist among students in the class.
- Document Based Essay Setup & Checklist (15 minutes): Guide the students to page 2 of the Document Based Essay Packet. Thoroughly and clearly discuss each section of the Setup and Checklist. Show the students examples of effective introductory paragraphs, body paragraphs, and concluding paragraphs (Scroll down to "Additional Materials" for sample introductory, body, and conclusion paragraphs)
- Choosing 6 Documents (10 minutes): Students must choose 6 documents that they are going to use to write the essay. Instruct the students to look at the last analysis question for each document. Have the students tally up the number of "Yes, Jefferson sounds like a hypocrite in this document" and "No, Jefferson does not sound like a hypocrite in this document" responses. If the students have more "Yes's" than "No's," then the students should choose 6 of the "Yes" documents, and vice versa. THE STUDENTS SHOULD CHOOSE THEIR 6 DOCUMENTS WISELY. THESE WILL BE THE BASIS FOR THE ARGUMENT THEY WILL CREATE.
- Creating a Clear Argument (5 minutes): Students should spend the remainder of the class period to create their clear argument that states what they intend to argue about whether or not Jefferson was a hypocrite. THE ARGUMENT THE STUDENTS MAKE SHOULD BE BASED DIRECTLY ON THE 6 DOCUMENTS THEY CHOSE TO USE. IF THE DOCUMENTS SAID "No, Jefferson is not a hypocrite," then that is the basis for the argument and vice versa. The teacher should circulate around the room and check in with each student on his/her individual progress. If the students are having difficulty coming up with a way to phrase their argument, guide them to Page 2 in the Document Based Question Packet. A sample argument is there for them to use as a guide. Encourage the students to be as specific as possible with their argument and to ask questions if they are not sure how to proceed.
- Homework: Students should finish the Introduction Paragraph by focusing on the three major components of an effective introduction: Attention Grabber, Background information about the time period, and your clear definitive argument. Students should bring the completed Introduction Paragraph to class tomorrow.
Day #5: 40 minutes in class, 30 minutes of homework
- Group Check & Share (10 minutes): Have students volunteer to read & share their introductory paragraphs with the class. Check to see that each student has completed the introduction & answer any questions that the students may have regarding any component of the introduction paragraph.
- Body Paragraph & Conclusion Writing (30 minutes): Students have the remainder of the class to work on constructing their 3 body paragraphs and conclusion paragraph. Refer the students to the Setup & Checklist on Page 2 of the Document Based Essay Packet. Encourage students to ask any questions and receive help & feedback during class, so that when they go home for the weekend, they fully understand everything that they must have completed for Monday.
- Homework: Students should finish the essay for homework over the weekend. Completed essay is due at the beginning of class on Monday.
Handouts and Downloads
- U.S. History Document Based Essay Rubric
- Was Thomas Jefferson a Hypocrite? Document Based Essay Packet
Sample Introductory Paragraph (from a Civil War Document Based Essay):
“Let us strive on to finish the work we are in; to bind up the nation’s wounds...” (Attention Grabber) These are the words spoken by Abraham Lincoln at his Second Inaugural Address on March 4, 1865. The Civil War was still raging on in the South, and Lincoln had a plan to bring the country back together after the war. Unfortunately, the President was fatally shot just days after the war ended, and his plan was never executed. This meant that the nation would never fully recover from the Civil War, and the country still feels its effects today (Background Information about the time period). The Civil War changed the United States by changing the political views of the country, introducing new technology and tactics into the way wars were fought, and changing the lives of African Americans. (Clear Argument)
Sample Body Paragraph (from a Civil War Document Based Essay):
The lives of African-Americans living in the South were changed by the Civil War. (Clear Main Point) Afterwards, their lives changed dramatically for the better. In Document 3, the 13th, 14th, and 15th Amendments were passed (Cited Proof). They gave blacks the right to Freedom, the right to Citizenship, and the right to not be judged by their race. This changed the United States by abolishing Slavery and overruling the Dred Scott Decision that stated that blacks were property (Analysis relating to argument). The lives of blacks were changed in a negative way when the post-Civil War group, the Ku Klux Klan, was founded. Document 5 states that this group threatened and even killed blacks and whites that supported black rights. This changed the lives of Americans living in the South because they no longer felt safe in their own homes.
Sample Conclusion Paragraph (from a Civil War Document Based Essay):
Lincoln never wanted the United States to still feel the negative effects of the Civil War, but the war didn’t just change the country for the worse. There are still many positive effects that have come as a result of it. For example, slavery has been abolished and the Union was saved. The country today is a very different country than it was in 1865 - in a much better way for the people living in the United States. (Re-stating the points from the body paragraphs). Therefore, the United States was changed politically, technologically, and socially because of the Civil War. (Rewording of the Argument)
Accommodations - Students with Special Needs
For students who need reduced and/or simplified modifications in this unit, teachers can make the following accommodations:
- Reduce the number of documents that the students must use in their document based essay. Students should at least try to use 3 documents (half of the required amount).
- If students use 3 documents, they can attempt to write one body paragraph about each document. This allows the students to have a more concrete understanding of how each document can be used to support their argument.
- I would recommend that documents #3, #4, and #6 be used for students with special education needs. These three documents represent a well-rounded perspective on the issue, but are straightforward enough for struggling students to understand what Jefferson is saying in each of these documents.
Accommodations - Advanced Learners
For students who need enrichment and/or more of a challenge in this unit, teachers can make the following accommodations:
- Increase the number of documents that the students must use in their document based essay. This will require students to effectively integrate more documents & information in their essay to support their argument, which is a skill that will be necessary to practice for success in AP history courses in high school.
- This Document Based essay fits the Pre-AP model of teaching students skills that are necessary to be successful in AP classes in high school.
Parliamentary Lesson Plans
Representation: Majority rule
Lesson plan target
- Students: Middle to upper secondary
- Level: Less challenging
Most decisions in parliament are based upon the principle of majority rule—the rule that requires more than half of the members who cast a vote to agree in order for the entire group to make a decision on the measure being voted on.
In this lesson students explore various forms of decision making including majority rule, executive, consensus and autocracy (as well as exploring the power of veto), when they debate a bill in a class parliament.
- understand why majority rule is used in parliament
- participate in a role-play in which a bill is used to explore four different methods of decision making
- understand the terms consensus, majority rule, veto and autocracy.
- Why do many governing bodies use majority rule to make decisions? (to ensure support for the decision)
- Why do some decision making bodies seek consensus? (to hear all opinions and to maintain relations)
- Why might autocracy succeed? (to limit power to an individual or small group)
- Majority rule
- Two-party system
- Body politic
- Start the discussion by eliciting different forms of decision making. (majority rule, consensus, autocracy, power of veto etc.)
- Brainstorm several examples of each. (majority rule may be used by sporting clubs, consensus by some families and classrooms, autocracy by some countries and many private businesses and veto by the American President and some families and classrooms etc.)
- What form of decision making does the federal Parliament use? (majority rule)
- Tell students that they will debate a bill to introduce military conscription in Australia and use four decision making systems to decide the issue.
- Define military conscription and discuss related issues.
- Arrange classroom chairs in a circle and appoint a chairperson to manage the debate.
- Have the chairperson select speakers by asking students who wish to speak to stand.
- Conclude the debate, and lead the following quick decision making scenarios:
- What form of decision making was most efficient?
- Was it difficult to reach a consensus? Why? What difference would 60 students or 120 students make to this method?
- What form of decision making was most fair? Least fair?
- What justification might the vetoer or autocrat have for their roles?
- Is there a best decision making system? Why? Why not?
- How is government formed in the Australian Parliament? (the party or coalition of parties with the support of the majority in the House of Representatives becomes government)
- Why does the Parliament use majority rule rather than consensus for passing laws?
- Recently in the House of Representatives a minority government was formed. How did this occur? How does a minority government change decision making?
- Is it acceptable that, on occasion, nearly half the representatives in a given assembly oppose laws that are passed?
- Are there grounds for enforcing a consensus or perhaps a 2/3 majority in parliament? What decisions might warrant this?
- Are there national issues that should be decided by consensus? Are there issues that should be decided by a smaller group?
- Majority rule tends to lead to a two-party system. Discuss in 300 words the advantages and disadvantages of this method of forming government and opposition.
- In 300 words, argue for greater consensual decision making in Australia.
| Decision method | Scenario action | Result |
| Autocratic decision | Randomly select an autocrat to decide. | |
| Executive decision | Determine a small executive (two or three people) to decide. | |
| Majority decision | Open ballot: conduct a vote with a show of hands; the majority decides. Secret ballot: conduct a vote by writing on a piece of paper, then have someone count the votes and declare the outcome. | |
| Consensus decision | Elicit compromise positions until the most favourable is determined. Consensus decides. | |
| Veto decision | Give yourself (as the teacher) the power to veto the consensus decision! | |
Autocracy: a system of government where one person, the autocrat, has complete power.
Veto power: the power of a person or group to turn down a proposal or block a decision made by others.
Consensus: general agreement among the members of a given group or community, each of which exercises some discretion in decision making and follow-up action.
Executive: a branch of government or local authority.
Compromise: an agreement reached when each side concedes part of its position in order to settle a matter.
Decision: a resolution, making up one's mind.
Documents and resources
PEO Fact Sheets:
Chapter 1: Algebra and Geometry Review
Algebra is a part of mathematics which solves problems by representing quantities by symbols (often called variables), expressing the relationships between the quantities as equations or inequalities, and manipulating these expressions according to well defined rules in order to find additional properties of the quantities and solve the problem.
This course assumes that you have already had considerable experience in using algebra, and that the material in this chapter is, for the most part, simply review. The remainder of this section will deal with rules for manipulating algebraic expressions, for solving linear and quadratic equations in one unknown, for solving systems of equations in more than one unknown, and for solving inequalities.
We begin with a review of the basic rules for simplifying algebraic expressions. You probably already know hundreds of these, and the purpose here is to point out those that we think are the most important. This will reduce the number of such rules down to a few dozen; in subsequent chapters, we will see that even this short list can be pared down to a few fundamentals from which all the rest follow.
Some really fundamental rules are the commutative laws (a + b = b + a and ab = ba), the associative laws ((a + b) + c = a + (b + c) and (ab)c = a(bc)), and the distributive law (a(b + c) = ab + ac).
The following examples show how these three fundamental rules can be combined to simplify more complicated expressions:
Example 1: You can distribute over longer sums by applying the distributive law multiple times: a(b + c + d) = a(b + c) + ad = ab + ac + ad.
Example 2: One can multiply out products of arbitrary sums using Example 1: (a + b)(c + d) = (a + b)c + (a + b)d = ac + bc + ad + bd.
Operator Precedence: In the above expressions we have been omitting parentheses. For example, we write ab + cd, which might be interpreted as a(b + c)d, but we know that the intention is (ab) + (cd). We know this because of the following precedence rules: exponentiation is performed first, then multiplication and division, and finally addition and subtraction; operators of equal precedence are evaluated from left to right.
Note: Depending on the context, multiple exponentiations without parentheses might be interpreted as being grouped left to right, right to left, or as simply being syntactically incorrect. The best practice is to always use an explicit parenthesization to avoid any possible misinterpretation.
Example 3: Be careful to evaluate operators of equal precedence from left to right. For example, 3/4 · 2 = (3/4) · 2 = 3/2 and not 3/(4 · 2) = 3/8.
Negatives: These are the main properties of the negation operator (unary minus): -(-a) = a, (-a)b = a(-b) = -(ab), (-a)(-b) = ab, and (-1)a = -a.
Fractions: These are the main properties of fractions. In all the formulas, one assumes that the quantities in the denominators are all non-zero: a/b + c/d = (ad + bc)/(bd), (a/b)(c/d) = (ac)/(bd), (a/b)/(c/d) = (ad)/(bc), and a/b = c/d exactly when ad = bc.
Powers: One can raise arbitrary real numbers to integer powers. One defines a^0 = 1 and, by induction, a^(n+1) = a^n · a. For negative integers -n, one defines a^(-n) = 1/a^n. For non-negative real numbers a and positive integers n, one can define the root a^(1/n) to be the unique non-negative real number b such that b^n = a. This allows one to define rational powers of non-negative numbers by a^(m/n) = (a^(1/n))^m. The following properties are true for these rational powers: a^r a^s = a^(r+s), (a^r)^s = a^(rs), and (ab)^r = a^r b^r.
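As a quick numerical illustration of these rules: taking a = 8, one has 8^(1/3) = 2 (since 2^3 = 8), so 8^(2/3) = (8^(1/3))^2 = 4 and 8^(-2/3) = 1/4; the rule a^r a^s = a^(r+s) then checks out as 8^(2/3) · 8^(1/3) = 4 · 2 = 8 = 8^1.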
The last section was concerned with the problem:
Given the values of some quantities, how do you calculate expressions involving those quantities? Most of the course will be concerned with the inverse problem:
Given the values of some expressions involving quantities, how do you find the values of the quantities?
The two principal arithmetic operations are addition and multiplication. Here is a problem of the second type:
Problem 1: Find two numbers given the values of their sum and product.
The solution of this problem was already known to the Babylonians, and is one of the most important algebra problems known to them. How can we solve it?
First let us represent the two numbers by the letters x and y and represent their sum and product by the letters a and b. The problem can then be expressed as: Given a and b, find x and y such that x + y = a and xy = b.
Now there are lots of pairs x and y such that x + y = a. One possibility is to take two equal numbers; this would give x = y and our equation becomes 2x = 2y = a. So, x = y = a/2. Now, if (a/2)^2 = b, we would have our solution.
On the other hand, if this weren't the case, we would not have the solution. This would be the case where the two numbers x and y are not equal. We can think of this as x being different from a/2, and so x = a/2 + t for some number t. Since we still want x + y = a, increasing x by t means that we have to decrease y by the same amount, i.e. y = a/2 - t. Now, let's try this as the solution by putting it in the second equation: (a/2 + t)(a/2 - t) = (a/2)^2 - t^2 = b.
Although it was not clear before how to choose the exact value for t, this last equality tells us that t^2 = (a/2)^2 - b, and so t = ±√((a/2)^2 - b). We know the square of the number t we need, and so we may take t = √((a/2)^2 - b). But then x = a/2 + √((a/2)^2 - b) and y = a/2 - √((a/2)^2 - b). Another solution is obtained by letting t be its second possible value; this simply switches the values of x and y.
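As a concrete check of this recipe, suppose the sum is a = 10 and the product is b = 21. Then x = 5 + t and y = 5 - t, so (5 + t)(5 - t) = 25 - t^2 = 21, giving t^2 = 4 and t = 2. The two numbers are x = 7 and y = 3, and indeed 7 + 3 = 10 and 7 · 3 = 21.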
Problem 1 is the most important algebra problem solved in antiquity. The approach we have taken is that of Diophantus of Alexandria. The presentation was very quick; so let us go back and comment on a number of important points:
Solving systems of equations was the topic of the last section. Let's concentrate in this section on the special case in which there is only one equation and only one variable. The next section will return to the more general case.
There are two basic ideas in working with equations:
Proposition 1: For all real numbers a, b, and c: (i) if a = b, then a + c = b + c and ac = bc; (ii) if ab = 0, then a = 0 or b = 0.
Example 4: To solve the general linear equation ax + b = c, we apply the first principle. Assuming that both sides are equal, we can add -b to both sides to get (ax + b) + (-b) = c + (-b) or ax = c - b. If a is non-zero, then one can multiply both sides by 1/a to obtain (1/a)(ax) = (1/a)(c - b), which simplifies to x = (c - b)/a.
Doing the same operations with numbers, one can start with 2x + 3 = 5. Assuming that both sides are equal, we can add -3 to both sides to get (2x + 3) + (-3) = 5 + (-3) or 2x = 5 - 3. Since the coefficient 2 is non-zero, one can multiply both sides by 1/2 to obtain (1/2)(2x) = (1/2)(5 - 3), or x = 1. Now, substitute 1 for x in the original equation to check to see that it is indeed a solution.
If you want to solve: for x, then, assuming that both sides of the equations are equal, one can multiply them by to get
So, the solution is x = 1. Again, one should substitute this back into the original equation to check that it is indeed a solution.
In each example, we started out by assuming that we had a solution, solved to find out what the solution must have been, and then checked to see that the value actually worked. This last step is NOT just a check for errors in algebra, but is a NECESSARY step in the procedure. For example, suppose you want to solve: . Assuming that you have a solution, we can proceed exactly as in the last example:
So, the solution can only be x = 2. But, when you try to substitute this value back into the original equation, you see that the denominator is zero. So, x = 2 is NOT a solution. This means that there are NO solutions to the original equation.
Example 5: Suppose you want to solve x^2 - 6x + 8 = 0. Assume that x is a solution to the equation. The left side can be factored to get (x - 2)(x - 4) = 0. Using the second part of Proposition 1, we know that either x - 2 = 0 or x - 4 = 0. So, x = 2 or x = 4. Substituting each of these values back into the original equation verifies that each of the two possibilities is indeed a solution of the original equation.
Quadratic Formula: Suppose we want to solve the general quadratic equation: ax^2 + bx + c = 0.
If a = 0, then the equation is bx + c = 0. This is a simple linear equation. If b is not zero, then it has a single solution x = -c/b. If b = 0 and c is not zero, then there are no solutions. Finally, if both b and c are zero, then every real number x satisfies the equation.
If a is not zero, then we can divide through by a to put the equation in the form x^2 + (b/a)x + (c/a) = 0. This is of the form of Problem 2 of the last section. The solutions obtained there were: x = (-b ± √(b^2 - 4ac))/(2a)
(provided that all the operations make sense, i.e. a is non-zero and b^2 - 4ac ≥ 0).
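For readers who want to check such formulas numerically, here is a minimal Python sketch of the quadratic formula (the function name is illustrative, and it only handles the case where a is non-zero and the discriminant is non-negative):

```python
import math

def solve_quadratic(a, b, c):
    """Real solutions of a*x**2 + b*x + c = 0, assuming a != 0."""
    disc = b * b - 4 * a * c          # the discriminant b^2 - 4ac
    if disc < 0:
        raise ValueError("no real solutions")
    root = math.sqrt(disc)
    return [(-b + root) / (2 * a), (-b - root) / (2 * a)]

# Example 5: x^2 - 6x + 8 = 0 has solutions 4 and 2.
print(solve_quadratic(1, -6, 8))      # [4.0, 2.0]
```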
Completing the square: This is another approach to solving a quadratic equation, and it is a method we will use in many other problems as well.
Let us assume that we wish to solve the equation x^2 + ax + b = 0. If we could rewrite the equation in the form (x + d)^2 + e = 0, then it would be easy to solve for x. This can be accomplished by the operation of completing the square. To see how to do it, just expand out this last expression to get x^2 + 2dx + (d^2 + e) = 0. If this is to be the same as x^2 + ax + b = 0, then corresponding coefficients in the two equations must be equal, i.e. we must have: 2d = a and d^2 + e = b.
But this system of equations is easy to solve for d and e. We have d = a/2 and e = b - (a/2)^2.
Let's rework our last example using this method:
Example 5: Solve x^2 - 6x + 8 = 0.
We want to express this in the form (x + d)^2 + e = 0. To do this, we take d = a/2 = -6/2 = -3. So, we would get (x - 3)^2 + e = 0. Rather than trying to remember the formula for e, we can just multiply out the square to get x^2 - 6x + 9 + e = 0. Comparing the constant terms, we need 9 + e = 8 and so e = -1, i.e. the equation becomes (x - 3)^2 - 1 = 0. We can now solve for x: we get (x - 3)^2 = 1 and so x - 3 = ±1. So x = 3 ± 1, and so the solutions are x = 4 or x = 2. It is easy to verify that these are both solutions of the original equation.
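The same computation works in general: for any numbers a and b one has the identity x^2 + ax + b = (x + a/2)^2 + (b - a^2/4), which is just the completed-square form described above written out as a single formula.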
When there is more than one variable, one often has more than one equation which the values of the variables are required to simultaneously satisfy. For example, in completing the square, we needed to find all solutions of the system of equations: 2d = a and d^2 + e = b, where the variables were d and e, and a and b were constants. We were not interested in pairs of numbers (d, e) which satisfied just one of the two equations; they needed to satisfy both equations.
Another example is the general system of 2 linear equations in two unknowns x and y: ax + by = e, cx + dy = f. Now, in the special case of the system in the last paragraph, it was easy to solve for the variables because one of the equations involved only one of the variables. We could use it to solve for that variable. Having its value, we could substitute its value into the other equation and obtain an equation involving only the second variable; this equation was then solved and we had all the possible solutions of the system.
What makes the general system of 2 linear equations look more difficult is that both equations involve both variables. There are two approaches: (1) substitution: use one equation to solve for one variable in terms of the other, and substitute that expression into the remaining equation; and (2) elimination: add a suitable multiple of one equation to the other so that one of the variables cancels out.
Example 6: Consider the system: 2x + 3y = 5, 4x - 7y = -3. Assume that x and y satisfy both equations. One can proceed using either method: by substitution, the first equation gives x = (5 - 3y)/2, and substituting this into the second gives 2(5 - 3y) - 7y = -3, i.e. 10 - 13y = -3, so y = 1 and then x = 1. By elimination, multiplying the first equation by 2 and subtracting the second gives 13y = 13, so again y = 1 and x = 1. Substituting (x, y) = (1, 1) into both of the original equations confirms that it is indeed a solution.
Example 7: Find all the solutions of the system of equations: x^2 + y^2 = 1, x + y = 1. Assume that one has a solution x, y. Solving the second equation for y, one gets a value y = 1 - x which, when substituted into the first equation, gives: x^2 + (1 - x)^2 = 1 or 2x^2 - 2x + 1 = 1. Collecting terms, we get a quadratic 2x^2 - 2x = 0. Factoring and using the second part of Proposition 1 gives x = 0 or x = 1. Substituting these values into our expression for y gives two possible solutions (x,y) = (0, 1) and (x,y) = (1, 0). Substituting each of these into the original equations verifies that both of these pairs are solutions of the original system of equations.
If we have more than two variables and more equations, we can apply the same basic strategies. For example, if you have three linear equations in three unknowns, you can use one of them to solve for one variable in terms of the other two. Substituting this expression into the two remaining equations gives two equations in two unknowns. This system can be solved by the method we just described. Then the solutions can be substituted back into the expression for the first variable to find all possible solutions. When you have these, substitute each triple of numbers into the original equations to see which of the possibilities are really solutions. One can also use the second approach as is illustrated by the next example.
Example 8: Solve the system of equations: x + y + z = 0, x + 2y + 2z = 2, x - 2y + 2z = 4. Assume that (x, y, z) is a solution. Subtracting the first equation from each of the other two equations gives y + z = 2, -3y + z = 4. Now subtracting the first of these from the second gives -4y = 2 or y = -1/2. Substituting this into the y + z = 2 gives z = 5/2. Finally, substituting these into the first of the original equations gives x = -2. So the only possible solution is (x, y, z) = (-2, -1/2, 5/2). Substituting these values into the original equations shows that this possible solution is, in fact, a solution of the original system.
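As a quick numerical check of Example 8, here is a short Python sketch (it assumes the numpy package is available; the same answer can of course be obtained by the hand elimination above):

```python
import numpy as np

# Coefficient matrix and right-hand side for the system
#   x +  y +  z = 0
#   x + 2y + 2z = 2
#   x - 2y + 2z = 4
A = np.array([[1.0, 1.0, 1.0],
              [1.0, 2.0, 2.0],
              [1.0, -2.0, 2.0]])
b = np.array([0.0, 2.0, 4.0])

x, y, z = np.linalg.solve(A, b)
print(x, y, z)   # -2.0 -0.5 2.5, i.e. (x, y, z) = (-2, -1/2, 5/2)
```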
In addition to the four basic algebraic operations on real numbers, there is also an order relation. The basic properties are:
Proposition 2: Let a, b, and c be real numbers. (i) Exactly one of the following holds: a < b, a = b, or a > b. (ii) If a < b, then a + c < b + c. (iii) If a < b and c > 0, then ac < bc. (iv) If a < b and c < 0, then ac > bc.
Just as Proposition 1 allowed one to operate with equations, Proposition 2 allows one to work with inequalities. The main difference is that multiplication tends to complicate things as there are two cases depending on whether the multiplier is positive or negative.
Example 9: Solve the general linear inequality ax + b < 0. Using Proposition 2, this implies ax < -b. If a > 0, Proposition 2 gives x < -b/a. On the other hand, if a < 0, then x > -b/a. Finally, if a = 0, then there are no possible solutions if b ≥ 0, and every real number is a solution if b < 0.
We still need to check that our possible solutions are indeed solutions. This is somewhat more difficult because we do not simply have a small number of values to substitute.
The absolute value |r| of a real number r is defined to be r if r is non-negative and -r otherwise. For example, |2| = 2, but |-2| = -(-2) = 2.
Example 10: Find all solutions of |2x - 3| > 5. There are two cases: if 2x - 3 ≥ 0, the inequality becomes 2x - 3 > 5, i.e. x > 4; if 2x - 3 < 0, it becomes -(2x - 3) > 5, i.e. 2x - 3 < -5, i.e. x < -1. So the solutions are exactly those x with x > 4 or x < -1.
Geometry is the mathematical study of the relationships between collections of points, curves, angles, surfaces and solid objects including the measurement of these objects and the distances between them. This review will discuss the correspondence between the real numbers and the points of a line, the general idea of analytic geometry, as well as the equations of lines, circles, and parabolas as well as the measurement of distance between points.
The most important result of classical synthetic geometry is:
Theorem 1: If ABC is a right triangle with hypotenuse of length c and legs of length a and b, then a^2 + b^2 = c^2.
To understand its proof, we need to know
Proposition 3: The sum of the angles of any triangle is 180 degrees.
Proof: Let ABC be an arbitrary triangle. Construct a line DE parallel to the base BC and through the third vertex A as shown in the diagram below.
Since BA is transversal to the pair of parallel lines BC and DE, one has ∠DAB = ∠ABC. Similarly, CA is transversal to the pair of parallel lines BC and DE. So, ∠EAC = ∠ACB. Since DAE is a straight line, we have ∠DAB + ∠BAC + ∠CAE = 180 degrees. From the diagram, we see that ∠ABC + ∠BAC + ∠BCA = ∠DAB + ∠BAC + ∠CAE = 180 degrees,
as was to be shown.
The proof of the Pythagorean Theorem will also require some simple facts about the areas of squares and triangles. For future reference, let's state these and other results as
Proposition 4: (Areas and Perimeters) (i) The area of a rectangle with sides of lengths a and b is ab; in particular, a square with side of length a has area a^2. (ii) The area of a triangle with base b and height h is bh/2. (iii) The circumference of a circle of radius r is 2πr, and its area is πr^2.
Now it is easy to see why the Pythagorean Theorem is true. Start with an arbitrary right triangle where C is the right angle and the opposite side (the hypotenuse) is of length c. Let a and b be the lengths of the legs which are opposite to the angles A and B respectively.
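One standard way to complete the argument using only the area facts above is the following sketch: place four copies of the right triangle inside a square with side of length a + b, one in each corner, so that their hypotenuses bound an inner quadrilateral. Each side of the inner quadrilateral has length c, and each of its angles is a right angle because the two acute angles of the right triangle add up to 90 degrees (by Proposition 3); so the inner region is a square of area c^2. Computing the area of the large square in two ways gives (a + b)^2 = 4(ab/2) + c^2, i.e. a^2 + 2ab + b^2 = 2ab + c^2, and cancelling 2ab from both sides yields a^2 + b^2 = c^2.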
Corollary 1: i. The length of the diagonal of a rectangle with sides of lengths a and b is precisely √(a^2 + b^2).
ii. Let T be a triangle with sides of length a, b, and c opposite angles A, B, and C respectively. Then T is a right triangle with right angle C if a^2 + b^2 = c^2.
Proof: i. The first assertion is an obvious consequence of Theorem 1.
ii. Let T' be a triangle where angle C' is a right angle and the sides opposite angles A' and B' are of length a and b respectively. By Theorem 1, the length of the side opposite angle C' is √(a^2 + b^2) = c. Then corresponding sides of the triangles T and T' are of equal length; so the two triangles are congruent. But then angle C must be equal to angle C' and so angle C is a right angle.
We will need to know about similar triangles. Two triangles ABC and A'B'C' are defined to be similar if ∠A = ∠A', ∠B = ∠B', and ∠C = ∠C'. By Proposition 3, if two of these equations hold, then so does the third.
Proposition 5: If the two triangles ABC and A'B'C' are similar, then the ratio of the lengths of a pair of sides of one triangle is equal to the ratio of the lengths of the corresponding pair of sides of the other triangle, e.g. AB/BC = A'B'/B'C'.
The rough idea of analytic geometry is to model the plane as the set of all pairs of real numbers. A curve is then the set of solutions of some algebraic equation. In order to solve a geometric problem, one first translates it into an algebra problem about the sets of algebraic equations. Then one uses algebra to solve this problem, and finally translates the answer back into geometric terms.
In the case of a line, one can model it as the set of real numbers.
To handle the plane, start with two perpendicular lines in the plane. Their point of intersection is denoted 0. The first line is called the x-axis and the second line is called the y-axis. Using the same unit distance as before, one can map the real numbers onto each of the two lines. With any pair (a, b) of real numbers, we can associate a point in the plane: From the point marked a on the x-axis, erect a line perpendicular to the x-axis. Similarly, from the point marked b on the y-axis, erect a line perpendicular to the y-axis. The point of intersection of these two lines is the point associated with (a, b).
Assumption 1: The above mapping is a 1-1 and onto mapping between the set of all pairs (a,b) of real numbers and the points of the plane.
Because of this assumption, we will usually not distinguish between the ordered pair (a, b) and its corresponding point in the plane. In particular, we will refer to the point as being the ordered pair (a,b).
Remark: By using three pairwise perpendicular lines intersecting at a point 0, one can name points in three dimensional space as simply triples (a, b, c) of real numbers. Of course, one could then define n-dimensional space, as simply being the set of ordered n-tuples of numbers.
Definition 1: The distance between the points P = (a,b) and Q = (c, d) is defined to be √((c - a)^2 + (d - b)^2). This is called the distance formula.
Although the formula is a bit complicated, this is the obvious definition as you can see by examining the diagram below in which PQ is simply the hypotenuse of a right triangle whose legs are of length |c - a| and |d - b| respectively. In light of the Pythagorean Theorem the formula for the length of this hypotenuse is the value given in the definition.
Example 11: The points on either axis labeled a and b are of distance |b - a| from each other -- where distance is calculated by Definition 1.
Example 12: The distance between the points (3, 5) and (2,7) is √((2 - 3)^2 + (7 - 5)^2) = √(1 + 4) = √5.
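A small Python sketch of Definition 1 (the function name is just illustrative):

```python
import math

def distance(p, q):
    """Distance between the points p = (a, b) and q = (c, d) as in Definition 1."""
    return math.sqrt((q[0] - p[0]) ** 2 + (q[1] - p[1]) ** 2)

# Example 12: the distance between (3, 5) and (2, 7) should be sqrt(5).
print(distance((3, 5), (2, 7)))   # 2.2360679... which is sqrt(5)
```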
Translation of Graphs If the point (x,y) satisfies the equation f(x, y) = 0, and if A and B are real numbers, then the point (x + A, y + B) satisfies the equation f(x - A, y - B) = 0 (because f((x+A) - A, (y+B) - B) = f(x, y) = 0). Another way of saying this is: if the graph of a function y = f(x) is moved A units to the right and B units up, then one obtains the graph of y - B = f(x - A). This amounts to the simple rules: replacing x by x - A in an equation moves its graph A units to the right, and replacing y by y - B moves the graph B units up.
Graph Compression and Expansion If the point (x,y) satisfies the equation f(x, y) = 0, and if A and B are non-zero real numbers, then the point (Ax, By) satisfies the equation f(x/A, y/B) = 0 (because f((Ax)/A, (By)/B) = f(x, y) = 0). Another way of saying this is: if the graph of a function y = f(x) is expanded by a factor of A horizontally and by a factor of B vertically, then the result is the graph of y = Bf(x/A). Again, you can give simple rules: replacing x by x/A in an equation expands its graph horizontally by a factor of A, and replacing y by y/B expands the graph vertically by a factor of B.
Reflecting a graph across an axis If the point (x, y) satisfies the equation f(x, y) = 0, then the point (-x, y) obtained by reflecting the point (x, y) across the y-axis satisfies f(-x, y) = 0. Similarly, the point (x, -y) obtained by reflecting the point (x, y) across the x-axis satisfies the equation f(x, -y) = 0.
Lines, circles, and other curves like parabolas should simply be the sets of solutions of certain algebraic equations. This appears to be the case. For example, we can define a vertical line to be the set of solutions of some equation x = a, where a is a constant. Similarly, a horizontal line is the set of solutions of an equation of the form y = b for some constant b.
Definition 2: A line is either a horizontal or vertical line or else it is the set of solutions of an equation of the form y = mx + b where m and b are constants.
The equation y = mx + b (or x = a) is called the equation of the line which is the set of solutions of the equation. Note that this is a 1-1 correspondence between the set of lines and the set of equations of lines. In particular, different equations correspond to different lines; in fact, you can easily convince yourself that through any two points there is precisely one line. The constant m is called the slope of the line and the constant b is called the y-intercept of the line. Note that the y-intercept is the y-coordinate of the point where the line intersects the y-axis. The slope and y-intercept of a vertical line is not defined. There are many algebraically equivalent forms of the equation of a line; the particular one y = mx + b is called the slope-intercept form of the equation of the line.
If (x_i, y_i) for i = 1, 2 are two distinct points on the line y = mx + b, then one has y_i = m x_i + b for i = 1, 2 and, taking differences, one gets y_2 - y_1 = m(x_2 - x_1).
Solving for m gives: m = (y_2 - y_1)/(x_2 - x_1).
Proposition 6: The slope of the non-vertical line through (x_1, y_1) and (x_2, y_2) is given by m = (y_2 - y_1)/(x_2 - x_1).
Example 13: The equation of the line through the two points (x_1, y_1) and (x_2, y_2) is given by y - y_1 = ((y_2 - y_1)/(x_2 - x_1))(x - x_1).
This is the so-called 2 point form of the equation of the line. It is proved by substituting in the two points.
Example 14: The equation of the line with slope m which passes through the point (x_1, y_1) is y - y_1 = m(x - x_1). This is called the point-slope form of the equation of the line.
Example 15: Consider the line through the two points (2, 3) and (5,7). Its slope is (7 - 3)/(5 - 2) = 4/3. Its equation in point-slope form is y - 3 = (4/3)(x - 2) (or equivalently y - 7 = (4/3)(x - 5)). To find the y-intercept, just substitute x = 0 into either of these equations and solve for y; the y-intercept is 1/3. So the slope-intercept form of the equation of the line is y = (4/3)x + 1/3. To find points on this line, just substitute in arbitrarily chosen values of x and solve for the y-coordinate.
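The arithmetic of Example 15 can be packaged into a short Python sketch (illustrative names; it assumes the two given points do not lie on a vertical line):

```python
def slope_intercept(p, q):
    """Slope m and y-intercept b of the non-vertical line through p and q."""
    (x1, y1), (x2, y2) = p, q
    m = (y2 - y1) / (x2 - x1)   # Proposition 6
    b = y1 - m * x1             # solve y1 = m*x1 + b for b
    return m, b

# Example 15: the line through (2, 3) and (5, 7) has m = 4/3 and b = 1/3.
print(slope_intercept((2, 3), (5, 7)))   # (1.3333..., 0.3333...)
```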
Two lines which have no point of intersection are said to be parallel.
Example 16: Two distinct lines are parallel if and only if they are either both vertical or else they both have the same slope.
Suppose the two lines are not vertical. Let y = m_1 x + b_1 and y = m_2 x + b_2 be their equations and let's assume (x, y) is a point of intersection. Then it satisfies both equations. Equating them, we get m_1 x + b_1 = m_2 x + b_2 and so (m_1 - m_2)x = b_2 - b_1. If the slopes were unequal, the coefficient of x would be non-zero and we could solve for x = (b_2 - b_1)/(m_1 - m_2) and substitute this back into either of the original equations to obtain the value for y. It is easy to then verify that this pair is indeed a point of intersection, i.e. satisfies both equations. On the other hand, if the slopes were equal, then we would have b_1 = b_2, which would mean that the lines were not distinct (because they have the same equation).
In case the two lines are vertical, the equations are of the form x = a_1 and x = a_2 where a_1 ≠ a_2. Clearly, no pair (x, y) could satisfy both of these equations; so distinct vertical lines are parallel.
Finally, if one line has the equation x = a and the other the equation y = mx + b, then the point (a, ma + b) is a point of intersection. This completes the proof of the assertion.
We say that two lines are perpendicular if they intersect at right angles to each other.
Example 17: Two lines are perpendicular if and only if one of the following is true: one of the lines is vertical and the other is horizontal, or the product of their slopes is -1.
It is easy to verify the result in case either of the two lines is vertical, either of the two lines is horizontal, or if the lines are parallel or coincident. So, assume that none of these conditions are true. Then the two lines have slopes m_1 and m_2, and their equations can be written y - b = m_i(x - a) for i = 1, 2 where P = (a, b) is the point of intersection of the two lines. The lines have points A and B with x-coordinates, say, a + 1. Using the equations of the lines, we see that the points are A = (a + 1, b + m_1) and B = (a + 1, b + m_2). By the Pythagorean Theorem, the lines are perpendicular if and only if |AB|^2 = |PA|^2 + |PB|^2. Using the distance formula, this amounts to (m_1 - m_2)^2 = (1 + m_1^2) + (1 + m_2^2).
If you multiply out and simplify this expression, you will see that it is equivalent to m_1 m_2 = -1.
Remark: The last example shows both the power and danger of analytic geometry. There was no understanding involved, the result came from brute algebraic computation.
A circle with center (a, b) and radius r is the set of points (x,y) whose distance from (a, b) is precisely r. By the distance formula, this circle is the set of solutions of (x - a)^2 + (y - b)^2 = r^2.
The set of solutions of an equation of the form y = ax^2 + bx + c, where a, b, and c are constants and a is non-zero, is called a parabola. The following properties are easy to verify: the parabola opens upward if a > 0 and downward if a < 0, its vertex occurs at x = -b/(2a), and the parabola is symmetric about the vertical line through its vertex.
Example 18: Find the equation of the line which passes through the two points of intersection of the circle centered at the origin of radius 1 and the circle centered at (1,1) with radius 1.
The equations of the two circles are x^2 + y^2 = 1 and (x - 1)^2 + (y - 1)^2 = 1. If (x, y) is a point of intersection, it must satisfy both equations. Expanding out the second equation, one gets x^2 - 2x + y^2 - 2y + 2 = 1. Subtracting the first equation and simplifying gives the equation x + y = 1. If we knew that there were two points of intersection, then they would both satisfy this equation and it is the equation of a line; so it must be the desired line. (In order to verify that there are indeed two points of intersection, finish solving the system of equations and verify that the two solutions satisfy the original equations.)
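Finishing that verification: substituting y = 1 - x from the line into x^2 + y^2 = 1 gives x^2 + (1 - x)^2 = 1, i.e. 2x^2 - 2x = 0, so x = 0 or x = 1, and the points of intersection are (0, 1) and (1, 0). Both points satisfy the equations of both circles, so there are indeed two points of intersection (this is the same system that was solved in Example 7).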
Let L be a non-vertical line through the origin. By rotating the line L around the y-axis, one obtains a pair of infinite cones with vertex at the origin. By intersecting this pair of cones with various planes, one obtains curves called ellipses, parabolas, and hyperbolas as well as some degenerate cases such as a single point or a single line. These intersections were called sections and so ellipses, parabolas, and hyperbolas were referred to as conic sections, i.e. sections of a cone.
Nowadays the importance of conic sections is not that they arise by intersecting a cone with a plane, but rather that they can be used to categorize all curves whose equations are polynomials of degree 2. So, lines are the curves represented by equations which are polynomials of degree 1, and conic sections are the curves represented by equations which are polynomials of degree 2.
In this section, we will define ellipses, parabolas, and hyperbolas without reference to sections of cones, and obtain equations for each in some special cases.
We have already defined parabolas to be the graphs of functions of the form y = ax^2 + bx + c. Let us give a more traditional definition:
Definition 3: Let F be a point and D be a line in the plane which does not contain F. Then the parabola with focus F and directrix D is the set of points P = (x, y) such that the distance from P to F is the same as the distance from P to D. (By the distance from P to the line D, we mean the shortest distance from P to any point of D, i.e. the length of the line segment obtained by dropping a line through P perpendicular to D.)
Now, let's look at a special case in which the focus is F = (0, d) and the directrix D is the line y = -d. If P = (x,y) is on the parabola with focus F and directrix D, then the distance formula tells us that √(x^2 + (y - d)^2) = |y + d|.
When you multiply this out and collect terms, one sees that this is just x^2 = 4dy or y = x^2/(4d).
Given the parabola y = ax^2, we see that a = 1/(4d) and so the focus is at (0, d) = (0, 1/(4a)) and the directrix is y = -d = -1/(4a). Where is the focus and directrix of ?
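For instance, for the parabola y = x^2 we have a = 1, so d = 1/(4a) = 1/4: the focus is the point (0, 1/4) and the directrix is the line y = -1/4.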
One could define ellipses and hyperbolas in a manner exactly analogous to Definition 3, viz.
Definition 3a: Let F be a point and D be a line in the plane which does not contain F and e be a positive real number. Then the conic section with focus F, directrix D, and eccentricity e is the set of points P = (x, y) such that the distance from P to F is equal to e times the distance from P to D. If e = 1, the conic section is called a parabola. The conic section is called an ellipse if e < 1 and a hyperbola if e > 1.
Such a definition leads to rather complicated formulas, and so, instead, we will define
Definition 4: Let F_1 and F_2 be two not necessarily distinct points in the plane and let a be a positive real number. Then the ellipse with foci F_1 and F_2 and semimajor axis a is the set of points P = (x, y) such that the sum of the distances from P to the foci is exactly 2a.
Clearly, if a is less than half the distance between the foci, then the ellipse with semimajor axis a is the empty set.
Again, let us consider a special case: Let F_1 = (-c, 0) and F_2 = (c, 0), where c ≥ 0 is a real number. Let a be a real number greater than c. Then if P = (x, y) is a point of the ellipse with these points as foci and with semimajor axis a, then we have by the distance formula: √((x + c)^2 + y^2) + √((x - c)^2 + y^2) = 2a.
This is the equation of the ellipse, but it is better to simplify it a bit. To do so, subtract the second square root from both sides and then square both sides to get: (x + c)^2 + y^2 = 4a^2 - 4a√((x - c)^2 + y^2) + (x - c)^2 + y^2.
Moving all the terms except for the square root from the right side to the left side and simplifying gives 4cx - 4a^2 = -4a√((x - c)^2 + y^2).
Dividing both sides by 4 and squaring again gives (cx - a^2)^2 = a^2((x - c)^2 + y^2),
which can be re-arranged to get (a^2 - c^2)x^2 + a^2 y^2 = a^2(a^2 - c^2).
Finally, since a is larger than c and both are non-negative, we can let b be a positive number such that b^2 = a^2 - c^2. Substituting this into our last formula and dividing both sides by a^2 b^2 gives us the standard form of the equation of the ellipse: x^2/a^2 + y^2/b^2 = 1.
The quantity e = c/a is called the eccentricity of the ellipse. The quantity b is called the semiminor axis of the ellipse. In particular, if the eccentricity of the ellipse is 0, then the two foci coincide, the semimajor and semiminor axes are equal, and the ellipse is a circle whose radius is precisely the semimajor axis.
Of course, one can also have ellipses with a vertical semimajor axis. If we simply interchange the roles of x and y in the above calculation, the final formula is the same, except that quantities a and b are interchanged. You distinguish the two cases by looking to see which of the two is the larger. You can also translate the graph of the equation to obtain ellipses centered at points (a, b) instead of at the origin (0, 0). As usual, the formulas look the same except that x is replaced with x - a and y is replace by y - b. Expansion and compression either horizontally or vertically just expands or contracts the values of a and b.
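As a concrete example, the ellipse x^2/25 + y^2/9 = 1 has semimajor axis a = 5 and semiminor axis b = 3, so c = √(a^2 - b^2) = 4; its foci are (-4, 0) and (4, 0), its eccentricity is e = c/a = 4/5, and the distances from any of its points to the two foci add up to 2a = 10.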
In analogy with Definition 4, we have
Definition 5: Let F_1 and F_2 be two not necessarily distinct points in the plane and let a be a positive real number. Then the hyperbola with foci F_1 and F_2 and semimajor axis a is the set of points P = (x, y) such that the difference of the distances from P to the foci is exactly 2a. (Note that we have to subtract the smaller of the two distances from the larger one in order to get 2a.)
Taking the foci (-c, 0) and (c, 0) as before, it is easy to use the distance formula to write down the equation of the hyperbola. You should go through the steps just as with the ellipse. When you are done, you will see that the equation of the hyperbola reduces down to x^2/a^2 - y^2/b^2 = 1,
where b is a positive number such that b^2 = c^2 - a^2. As before, we call e = c/a the eccentricity of the hyperbola. Note that e > 1 for hyperbolas and less than 1 for ellipses.
Angle Measurement: Angles are measured in radians. Start with the unit circle centered at the origin (0, 0). Make one side of the angle the positive x-axis. The other side of the angle is another ray from the origin, say 0A in the diagram below. The magnitude of the radian measure of the angle is equal to the length of the arc of the circle swept out as the positive x-axis is rotated through the angle to the ray 0A. The radian measure is positive if the arc is swept out in a counterclockwise direction and is negative otherwise. If we take instead of the unit circle the circle centered at the origin of radius r, then the arc swept out by the same action is of length rθ, where θ is the radian measure of the angle.
Again referring to the diagram above, if A is the point on the unit circle corresponding to the second ray of the angle θ, then its coordinates are by definition the cosine and sine of the angle θ, i.e. A = (cos θ, sin θ). Since A lies on the unit circle, we have the identity:
cos^2 θ + sin^2 θ = 1. Further, it is clear by the symmetry of the circle that we have: cos(-θ) = cos θ and sin(-θ) = -sin θ.
If B is the point on the ray 0A where the ray intersects the circle of radius r, then B = (r cos θ, r sin θ), and cos θ is equal to the ratio of the x-coordinate of B and the hypotenuse r (because of similar triangles). This explains the common definition of the cosine as being the ratio of the adjacent side to the hypotenuse; when using this, be careful that the adjacent side may be either a positive or negative number. Similarly, the sine of the same angle is the quotient of the opposite side over the hypotenuse (where the opposite side might be negative). These definitions allow one to create tables of trigonometric functions for practical use in solving right triangles. For example, Ptolemy computed tables in half-degree increments.
The remaining trigonometric functions are: tan θ = sin θ / cos θ, cot θ = cos θ / sin θ, sec θ = 1 / cos θ, and csc θ = 1 / sin θ.
Example 19: Dividing the identity cos^2 θ + sin^2 θ = 1 by cos^2 θ and using the above definitions, one obtains the identity: 1 + tan^2 θ = sec^2 θ.
Similarly, by dividing by sin^2 θ, one obtains the identity: cot^2 θ + 1 = csc^2 θ.
Example 20: Let θ be the angle opposite the side of length 3 in a right triangle whose sides are of length 3, 4, and 5. Then the trigonometric functions of θ are: sin θ = 3/5, cos θ = 4/5, tan θ = 3/4, cot θ = 4/3, sec θ = 5/4, and csc θ = 5/3.
The values of the trigonometric functions at a number of commonly occurring angles are given in the table below:
| θ | 0 | π/6 | π/4 | π/3 | π/2 |
| sin θ | 0 | 1/2 | √2/2 | √3/2 | 1 |
| cos θ | 1 | √3/2 | √2/2 | 1/2 | 0 |
| tan θ | 0 | √3/3 | 1 | √3 | undefined |
Rather than memorize the table, it is easier to keep in mind the figure above used to define the trig functions and reconstruct the two standard triangles shown below.
There are many more important results from trigonometry. The most important are:
Proposition 7: (Addition Formulas) For arbitrary angles α and β, one has: sin(α + β) = sin α cos β + cos α sin β and cos(α + β) = cos α cos β - sin α sin β.
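As a quick illustration of how these formulas are used: setting β = α gives the double-angle formulas sin(2α) = 2 sin α cos α and cos(2α) = cos^2 α - sin^2 α.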
Proposition 8: Let ABC be any triangle and a, b, c be the lengths of the sides opposite angles A, B, and C respectively. Then a/sin A = b/sin B = c/sin C (the Law of Sines) and c^2 = a^2 + b^2 - 2ab cos C (the Law of Cosines).
You will have an opportunity to prove these results in the exercises.
All contents © copyright 2001 K. K. Kubota. All rights reserved
Principles of Philosophy
Now that Descartes has found a piece of certain knowledge—that he exists as a thinking thing—he starts to look around for more of these self-evident truths. He discovers that he has quite a few of them, prominent among these being the truths of mathematics and logic, and he is optimistic about his chances for developing a system of certain knowledge. Then he realizes a kink in his plan. These clear and distinct perceptions are only indubitable so long as he is attending to them. As soon as they fall out of awareness, the doubt can creep back in. Once again, he can begin to wonder whether it was an evil demon who caused him to believe in the certainty of these truths. Suddenly, things do not look too rosy for his system of certain knowledge; if he needs to keep every truth perpetually before his mind, then he cannot expect to make much headway in unraveling the facts of nature.
Descartes' solution is to bring God into the picture. By proving that God is the cause of our clear and distinct perception, and that, further, God is perfect in every way and thus no deceiver, he will be able to secure lasting certainty for clear and distinct perceptions. He, therefore, sets out to prove that God exists.
Descartes gives at least two arguments for God's existence. The first one, found in I.14, is a version of the ontological argument for God's existence. Descartes' ontological argument goes as follows: (1) Our idea of God is of a perfect being, (2) it is more perfect to exist than not to exist, (3) therefore, God must exist.
The second argument that Descartes gives for this conclusion is far more complex. This argument rests on the distinction between two sorts of reality. Formal reality is the reality that anything has in virtue of existing. It is just regular, garden-variety reality. Formal reality comes in three grades: infinite, finite, and mode. God is the only existing thing with infinite formal reality. Substances all have finite formal reality. Finally, modes have modal formal reality. An idea, insofar as it is considered as an occurrent piece of thought, has modal formal reality (since any particular thought, as we will see later, is just a mode of mind).
Ideas, however, also have another kind of reality, unique to them. When considered in their relation to the objects they represent, ideas can be said to have objective reality. There are three grades of objective reality, precisely mirroring the three grades of formal reality. The amount of objective reality contained in an idea is determined solely on the basis of the amount of formal reality contained in the object represented by the idea.
Descartes begins the argument by making the controversial claim that we all have an idea of God as an infinite being. (He believes that we cannot fail to have this idea because he thinks it is innate.) Because our idea of God is of an infinite being, it must have infinite objective reality. Next, Descartes appeals to an innate logical principle: something cannot come from nothing. Reasoning from this principle he arrives at two other causal principles: (1) There must be as much reality in a cause as in an effect, and so, (2) there must be as much formal reality in a cause of an idea as there is objective reality in an idea. Since we have an idea with infinite objective reality (namely, the idea of God), Descartes is able to conclude that there is a being with infinite formal reality who caused this idea. In other words, God exists.
One of the most famous objections to Descartes' philosophy attacks his use of the proof of God in order to validate clear and distinct perceptions. The objection, often referred to as the "Cartesian Circle," is that Descartes uses God to prove the truth of clear and distinct perceptions and also uses clear and distinct perceptions to prove the existence of God. How can he use clear and distinct perceptions to prove God's existence, these critics ask, if he needs God in order to prove that clear and distinct perceptions tell us the truth? This does, indeed, sound like circular reasoning.
Descartes, however, has not made this foolish mistake. God's existence does not prove that clear and distinct perceptions are true. We do not need any proof that clear and distinct perceptions are true. In fact, what it means for something to be a clear and distinct perception is that, so long as we are attending to it, we cannot possibly doubt its truth. God is only needed to ensure that doubt does not creep in after we stop attending to these perceptions. Descartes, then, can legitimately use clear and distinct perceptions to prove God's existence. In the proof of God's existence we are using clear and distinct perceptions that we are attending to, and so we cannot doubt their truth. After we prove God's existence, the only thing that changes is that now we do not have to keep attending to these perceptions to be certain that they are true.
There are, however, other problems with Descartes' arguments for the existence of God. The ontological argument is particularly faulty. Ontological arguments are common in the history of philosophy. The medieval philosopher St. Anselm gave a famous version of the ontological argument, and even Plato puts an ontological argument in Socrates' mouth in the Phaedo. Nicolas Malebranche, Baruch Spinoza, and G.W. Leibniz all have their own versions of the ontological argument.
In fact, in order to be a proper Cartesian rationalist (i.e. someone who believes that the entire world can be explained in terms of a chain of logical connections and that we have access to this explanation), you have to believe in the possibility of an ontological argument. Without an ontological argument, explanation must either end in some brute, unexplained fact, or turn into an infinite regress, where there is no end to explanation. In order to ensure that explanation comes to a final halt (and a halt with no loose, unexplained end), it is necessary that there be some level of reality that causes itself, something that is its own explanation. The only plausible candidate for an entity that is its own explanation is God. And the only way for God to be his own explanation is for some version of the ontological argument to work.
To understand why a self-causing thing is necessary to bring explanation to a satisfying end, consider what would happen if there were no such self-causing thing (which, unfortunately, there probably is not): in order to explain any fact, you would have to appeal to another fact, and then, to explain that fact, to another, and, for that one, to another, and infinitely on. Unless, of course, you ended up at a fact that simply could not be explained, in which case you would not have managed to give an explanation for everything in the world. Now imagine that there is something that is its own explanation: in order to explain a fact, you have to appeal to another fact, and to explain that fact, to another, and on and on, until, ultimately, you hit upon a final fact that explains itself. Everything has been explained. There are no loose ends. The rationalist's job is done.
Unfortunately, as appealing as this picture of explanation is, ontological arguments involve a severe logical fallacy. They simply do not work. Immanuel Kant was the first to point this problem out, although he himself had given his own version of the ontological argument years earlier. The reason that the ontological argument cannot work is that it treats the existential verb (i.e. to be) as a property like other properties, a property that something can either have or not have. Clearly, though, existence is not a property like other properties. It is not even logically coherent to say "God does not have existence." If God does not exist, he cannot have properties, and he also cannot not have properties. He simply is not. The rationalists, and those before them, failed to notice this big difference separating existence from other properties.
The causal argument also has its fair share of problems. The strange notions of reality that Descartes introduces are easy targets for attack. Why claim, for instance, that there is any special kind of reality called "objective reality"? Why assume, for that matter, that reality comes in grades that are so metaphysically loaded? Even more fatal than these legitimate worries, however, is the fact that Descartes' central claim is demonstrably false. We do not all have a clear and distinct innate idea of God as a being of infinite perfection. The only people who have this idea are those who were raised in cultures where the notion of a single and perfect supreme being was prevalent.
| http://www.sparknotes.com/philosophy/principles/section4.rhtml | 13 |
15 | The main explanation for the origins of the American Civil War is slavery, especially Southern anger at the attempts by Northern antislavery political forces to block the expansion of slavery into the western territories. States' rights and the tariff issue became entangled in the slavery issue, and were intensified by it. Other important factors were party politics, Abolitionism, Southern nationalism, Northern nationalism, expansionism, sectionalism, economics and modernization in the Antebellum Period.
The United States (U.S.) was a nation divided into two distinct regions separated by the Mason-Dixon line. New England, the Northeast and the Midwest had a rapidly-growing economy based on family farms, industry, mining, commerce and transportation, with a large and rapidly growing urban population and no slavery north of the border states. Its growth was fed by a high birth rate and large numbers of European immigrants, especially Irish, British and German.
The South was dominated by a settled plantation system based on slavery, with rapid growth taking place in the Southwest, in areas such as Texas, based on high birth rates and high migration from the Southeast, but a lower rate of immigration from Europe. There were fewer large cities, and little manufacturing except in border areas. Slave owners controlled politics and economics. Two-thirds of the Southern whites owned no slaves and usually were engaged in subsistence agriculture.
Overall, the Northern population was growing much more quickly than the Southern population, which made it increasingly difficult for the South to continue to influence the national government. Southerners were worried about the relative political decline of their region because the North was growing much faster in terms of population and industrial output.
In the interest of maintaining unity, politicians had mostly moderated opposition to slavery, resulting in numerous compromises such as the Missouri Compromise of 1820. After the Mexican-American War, the issue of slavery in the new territories led to the Compromise of 1850. While the compromise averted an immediate political crisis, it did not permanently resolve the issue of the Slave power (the power of slaveholders to control the national government).
Amid the emergence of increasingly virulent and hostile sectional ideologies in national politics, the collapse of the old Second Party System in the 1850s hampered efforts of the politicians to reach yet one more compromise. The compromise that was reached (the Kansas-Nebraska Act) outraged too many northerners. In the 1850s, with the rise of the Republican Party, the first major party with no appeal in the South, the industrializing North and agrarian Midwest became committed to the economic ethos of free-labor industrial capitalism.
Arguments that slavery was undesirable for the nation had long existed. After 1840, abolitionists denounced slavery as more than a social evil: it was a moral wrong. Many Northerners, especially leaders of the new Republican Party, considered slavery a great national evil and believed that a small number of Southern owners of large plantations controlled the national government with the goal of spreading that evil.
In 1860, the election of Abraham Lincoln, who won the national election without receiving a single electoral vote from any of the Southern states, triggered the secession of the cotton states of the Deep South from the union and their formation of the Confederate States of America.
At the time of the American Revolution, the institution of slavery was firmly established in the American colonies. It was most important in the five southern states from Maryland to Georgia, but the total of half a million slaves was spread out through all of the colonies. In the South, 40% of the population was made up of slaves, and as Americans moved into Kentucky and the rest of the southwest, fully one-sixth of the settlers were slaves. By the end of the war, the New England states provided most of the American ships that were used in the foreign slave trade, while most of their customers were in Georgia and the Carolinas.
During this time many Americans found it difficult to reconcile slavery with their Christian beliefs and the lofty sentiments that flowed from the Declaration of Independence. A small antislavery movement, led by the Quakers, had some impact in the 1780s, and by the late 1780s all of the states except for Georgia had placed some restrictions on their participation in slave trafficking. Still, no serious national political movement against slavery developed, largely due to the overriding concern over achieving national unity. When the Constitutional Convention met, slavery was the one issue "that left the least possibility of compromise, the one that would most pit morality against pragmatism." In the end, while many would take comfort in the fact that the word slavery never occurs in the Constitution, critics note that the three-fifths clause provided slaveholders with extra representatives in Congress, the requirement that the federal government suppress domestic violence would dedicate national resources to defending against slave revolts, a twenty-year delay in banning the import of slaves allowed the South to fortify its labor needs, and the amendment process made the national abolition of slavery very unlikely in the foreseeable future.
With the outlawing of the African slave trade on January 1, 1808, many Americans felt that the slavery issue was resolved. Any national discussion that might have continued over slavery was drowned out by the years of trade embargoes, maritime competition with Great Britain and France, and, finally, the War of 1812. The one exception to this quiet regarding slavery was New Englanders' tendency to link their frustration with the war to their resentment of the three-fifths clause, which seemed to allow the South to dominate national politics.
In the aftermath of the American Revolution, the northern states (north of the Mason-Dixon Line separating Pennsylvania and Maryland) abolished slavery by 1804. In the 1787 Northwest Ordinance, Congress (still under the Articles of Confederation) barred slavery from the Mid-Western territory north of the Ohio River, but when the U.S. Congress organized the southern territories acquired through the Louisiana Purchase, the ban on slavery was omitted.
In 1819 Congressman James Tallmadge Jr. of New York initiated an uproar in the South when he proposed two amendments to a bill admitting Missouri to the Union as a free state. The first barred slaves from being moved to Missouri, and the second would free all Missouri slaves born after admission to the Union at age 25. With the admission of Alabama as a slave state in 1819, the U.S. was equally divided with 11 slave states and 11 free states. The admission of the new state of Missouri as a slave state would give the slave states a majority in the Senate; the Tallmadge Amendment would have given the free states a majority.
The Tallmadge amendments passed the House of Representatives but failed in the Senate when five Northern Senators voted with all the Southern senators. The question was now the admission of Missouri as a slave state, and many leaders shared Thomas Jefferson's fear of a crisis over slavery--a fear that Jefferson described as "a fire bell in the night". The crisis was solved by the Compromise of 1820, which admitted Maine to the Union as a free state at the same time that Missouri was admitted as a slave state. The Compromise also banned slavery in the Louisiana Purchase territory north and west of the state of Missouri along the 36°30′ line. The Missouri Compromise quieted the issue until its limitations on slavery were repealed by the Kansas-Nebraska Act of 1854.
In the South the Missouri crisis reawakened old fears that a strong federal government could be a fatal threat to slavery. The Jeffersonian coalition that united southern planters and northern farmers, mechanics and artisans in opposition to the threat presented by the Federalist Party had started to dissolve after the War of 1812. It was not until the Missouri crisis that Americans became aware of the political possibilities of a sectional attack on slavery, and it was not until the mass politics of the Jackson Administration that this type of organization around this issue became practical.
The American System, advocated by Henry Clay in Congress and supported by many nationalist supporters of the War of 1812 such as John C. Calhoun, was a program for rapid economic modernization featuring protective tariffs, internal improvements at Federal expense, and a national bank. The purpose was to develop American industry and international commerce. Since iron, coal, and water power were mainly in the North, this tax plan was doomed to cause rancor in the South where economies were agriculture-based. Southerners claimed it demonstrated favoritism toward the North.
The nation suffered an economic downturn throughout the 1820s, and South Carolina was particularly affected. The highly protective Tariff of 1828 (also called the "Tariff of Abominations"), designed to protect American industry by taxing imported manufactured goods, was enacted into law during the last year of the presidency of John Quincy Adams. Opposed in the South and parts of New England, the expectation of the tariff’s opponents was that with the election of Andrew Jackson the tariff would be significantly reduced.
By 1828 South Carolina state politics increasingly organized around the tariff issue. When the Jackson administration failed to take any actions to address their concerns, the most radical faction in the state began to advocate that the state declare the tariff null and void within South Carolina. In Washington, an open split on the issue occurred between Jackson and his vice-president John C. Calhoun, the most effective proponent of the constitutional theory of state nullification through his 1828 "South Carolina Exposition and Protest."
Congress enacted a new tariff in 1832, but it offered the state little relief, resulting in the most dangerous sectional crisis since the Union was formed. Some militant South Carolinians even hinted at withdrawing from the Union in response. The newly-elected South Carolina legislature then quickly called for the election of delegates to a state convention. Once assembled, the convention voted to declare null and void the tariffs of 1828 and 1832 within the state. President Andrew Jackson responded firmly, declaring nullification an act of treason. He then took steps to strengthen federal forts in the state.
Violence seemed a real possibility early in 1833 as Jacksonians in Congress introduced a "Force Bill" authorizing the President to use the Federal army and navy in order to enforce acts of Congress. No other state had come forward to support South Carolina, and the state itself was divided on willingness to continue the showdown with the Federal government. The crisis ended when Clay and Calhoun worked to devise a compromise tariff. Both sides later claimed victory. Calhoun and his supporters in South Carolina claimed a victory for nullification, insisting that it had forced the revision of the tariff. Jackson's followers, however, saw the episode as a demonstration that no single state could assert its rights by independent action.
Calhoun, in turn, devoted his efforts to building up a sense of Southern solidarity so that when another standoff should come, the whole section might be prepared to act as a bloc in resisting the federal government. As early as 1830, in the midst of the crisis, Calhoun identified the right to own slaves as the chief southern minority right being threatened:
I consider the tariff act as the occasion, rather than the real cause of the present unhappy state of things. The truth can no longer be disguised, that the peculiar domestick [sic] institution of the Southern States and the consequent direction which that and her soil have given to her industry, has placed them in regard to taxation and appropriations in opposite relation to the majority of the Union, against the danger of which, if there be no protective power in the reserved rights of the states they must in the end be forced to rebel, or, submit to have their paramount interests sacrificed, their domestic institutions subordinated by Colonization and other schemes, and themselves and children reduced to wretchedness.
The issue appeared again after 1842's Black Tariff. A period of relative free trade after 1846's Walker Tariff reduction followed until 1860, when the protectionist Morrill Tariff was introduced by the Republicans, fueling Southern anti-tariff sentiments once again.
There had been a continuing contest between the states and the national government over the power of the latter—and over the loyalty of the citizenry—almost since the founding of the republic. The Kentucky and Virginia Resolutions of 1798, for example, had defied the Alien and Sedition Acts, and at the Hartford Convention, New England voiced its opposition to President James Madison and the War of 1812, and discussed secession from the Union.
Although only a small minority of free Southerners ever owned slaves (and, in turn, a similarly small minority of those slaveholders owned the vast majority of slaves), Southerners of all classes nevertheless defended the institution of slavery– threatened by the rise of free-labor abolitionist movements in the Northern states– as the cornerstone of their social order.
Based on a system of plantation slavery, the social structure of the South was far more stratified and patriarchal than that of the North. In 1850 there were around 350,000 slaveholders in a total free Southern population of about six million. Among slaveholders, the concentration of slave ownership was unevenly distributed. Perhaps around 7 percent of slaveholders owned roughly three-quarters of the slave population. The largest slaveholders, generally owners of large plantations, represented the top stratum of Southern society. They benefited from economies of scale and needed large numbers of slaves on big plantations to produce profitable labor-intensive crops like cotton. This plantation-owning elite, known as "slave magnates", was comparable to the millionaires of the following century.
In the 1850s as large plantation owners out-competed smaller farmers, more slaves were owned by fewer planters. Yet, while the proportion of the white population consisting of slaveholders was on the decline on the eve of the Civil War—perhaps falling below around a quarter of free southerners in 1860—poor whites and small farmers generally accepted the political leadership of the planter elite.
Several factors helped explain why slavery was not under serious threat of internal collapse from any moves for democratic change initiated from the South. First, given the opening of new territories in the West for white settlement, many non-slaveowners also perceived a possibility that they, too, might own slaves at some point in their life.
Second, small free farmers in the South often embraced hysterical racism, making them unlikely agents for internal democratic reforms in the South. The principle of white supremacy, accepted by almost all white southerners of all classes, made slavery seem legitimate, natural, and essential for a civilized society. White racism in the South was sustained by official systems of repression such as the "slave codes" and elaborate codes of speech, behavior, and social practices illustrating the subordination of blacks to whites. For example, the "slave patrols" were among the institutions bringing together southern whites of all classes in support of the prevailing economic and racial order. Serving as slave "patrollers" and "overseers" offered white southerners positions of power and honor. These positions gave even poor white southerners the authority to stop, search, whip, maim, and even kill any slave traveling outside his or her plantation. Slave "patrollers" and "overseers" also won prestige in their communities. Policing and punishing blacks who transgressed the regimentation of slave society was a valued community service in the South, where the fear of free blacks threatening law and order figured heavily in the public discourse of the period.
Third, many small farmers with a few slaves and yeomen were linked to elite planters through the market economy. In many areas, small farmers depended on local planter elites for vital goods and services including (but not limited to) access to cotton gins, access to markets, access to feed and livestock, and even for loans (since the banking system was not well developed in the antebellum South). Southern tradesmen often depended on the richest planters for steady work. Such dependency effectively deterred many white non-slaveholders from engaging in any political activity that was not in the interest of the large slaveholders. Furthermore, whites of varying social castes, including poor whites and "plain folk" who worked outside or in the periphery of the market economy (and therefore lacked any real economic interest in the defense of slavery) might nonetheless be linked to elite planters through extensive kinship networks. Since inheritance in the South was often inequitable (and generally favored eldest sons), it was not uncommon for a poor white person to be perhaps the first cousin of the richest plantation owner of his county and to share the same militant support of slavery as his richer relatives. Finally, it should be remembered that there was no secret ballot at the time anywhere in the United States; this innovation did not become widespread in the U.S. until the 1880s. For a typical white Southerner, this meant that so much as casting a ballot against the wishes of the establishment meant running the risk of social ostracism.
Thus, by the 1850s, Southern slaveholders and non-slaveholders alike felt increasingly encircled psychologically and politically in the national political arena because of the rise of free soilism and abolitionism in the Northern states. Increasingly dependent on the North for manufactured goods, for commercial services, and for loans, and increasingly cut off from the flourishing agricultural regions of the Northwest, they faced the prospects of a growing free labor and abolitionist movement in the North.
With the outcry over developments in Kansas strong in the North, defenders of slavery— increasingly committed to a way of life that abolitionists and their sympathizers considered obsolete or immoral— articulated a militant pro-slavery ideology that would lay the groundwork for secession upon the election of a Republican president. Southerners waged a vitriolic response to political change in the North. Slaveholding interests sought to uphold their constitutional rights in the territories and to maintain sufficient political strength to repulse "hostile" and "ruinous" legislation. Behind this shift was the growth of the cotton industry, which left slavery more important than ever to the Southern economy.
Reactions to the popularity of Uncle Tom's Cabin (1852) by Harriet Beecher Stowe (whom Abraham Lincoln reputedly called "the little woman that started this great war") and the growth of the abolitionist movement (pronounced after the founding of The Liberator in 1831 by William Lloyd Garrison) inspired an elaborate intellectual defense of slavery. Increasingly vocal (and sometimes violent) abolitionist movements, culminating in John Brown's raid on Harpers Ferry in 1859 were viewed as a serious threat, and—in the minds of many Southerners—abolitionists were attempting to foment violent slave revolts as seen in Haiti in the 1790s and as attempted by Nat Turner some three decades prior (1831).
After J. D. B. DeBow established De Bow's Review in 1846, it grew to become the leading Southern magazine, warning the planter class about the dangers of depending on the North economically. De Bow's Review also emerged as the leading voice for secession. The magazine emphasized the South's economic inequality, relating it to the concentration of manufacturing, shipping, banking and international trade in the North. Searching for Biblical passages endorsing slavery and marshaling economic, sociological, historical and scientific arguments, Southern writers recast slavery from a "necessary evil" into a "positive good". Dr. J.H. Van Evrie's book Negroes and Negro slavery: The First an Inferior Race: The Latter Its Normal Condition– setting out the arguments the title would suggest– was an attempt to apply scientific support to the Southern arguments in favor of race-based slavery.
Latent sectional divisions suddenly activated derogatory sectional imagery, which hardened into sectional ideologies. As industrial capitalism gained momentum in the North, Southern writers emphasized whatever aristocratic traits they valued (but often did not practice) in their own society: courtesy, grace, chivalry, the slow pace of life, orderly life and leisure. This supported their argument that slavery provided a more humane society than industrial labor.
In his Cannibals All!, George Fitzhugh argued that the antagonism between labor and capital in a free society would result in "robber barons" and "pauper slavery", while in a slave society such antagonisms were avoided. He advocated enslaving Northern factory workers, for their own benefit. Abraham Lincoln, on the other hand, denounced such Southern insinuations that Northern wage earners were fatally fixed in that condition for life. To Free Soilers, the stereotype of the South was one of a diametrically opposite, static society in which the slave system maintained an entrenched anti-democratic aristocracy.
According to the historian James McPherson, exceptionalism applied not to the South but to the North after the North phased out slavery and launched an industrial revolution that led to urbanization, which in turn led to increased education, which in its own turn gave ever-increasing strength to various reform movements, but especially abolitionism. The fact that seven immigrants out of eight settled in the North (and the fact that most immigrants viewed slavery with disfavor), compounded by the fact that twice as many whites left the South for the North as vice versa, contributed to the South's defensive-aggressive political behavior. The Charleston Mercury read that on the issue of slavery the North and South "are not only two Peoples, but they are rival, hostile Peoples." As De Bow's Review said, "We are resisting revolution.... We are not engaged in a Quixotic fight for the rights of man.... We are conservative."
Allan Nevins argued that the Civil War was an "irrepressible" conflict. Nevins synthesized contending accounts emphasizing moral, cultural, social, ideological, political, and economic issues. In doing so, he brought the historical discussion back to an emphasis on social and cultural factors. Nevins pointed out that the North and the South were rapidly becoming two different peoples, a point made also by historian Avery Craven. At the root of these cultural differences was the problem of slavery, but fundamental assumptions, tastes, and cultural aims of the regions were diverging in other ways as well. More specifically, the North was rapidly modernizing in a manner threatening to the South. Historian James McPherson explains:
When secessionists protested in 1861 that they were acting to preserve traditional rights and values, they were correct. They fought to preserve their constitutional liberties against the perceived Northern threat to overthrow them. The South's concept of republicanism had not changed in three-quarters of a century; the North's had.... The ascension to power of the Republican Party, with its ideology of competitive, egalitarian free-labor capitalism, was a signal to the South that the Northern majority had turned irrevocably towards this frightening, revolutionary future.
Harry L. Watson has synthesized research on antebellum southern social, economic, and political history. Self-sufficient yeomen, in Watson's view, "collaborated in their own transformation" by allowing promoters of a market economy to gain political influence. Resultant "doubts and frustrations" provided fertile soil for the argument that southern rights and liberties were menaced by Black Republicanism.
J. Mills Thornton III explained the viewpoint of the average white Alabamian. Thornton contends that Alabama was engulfed in a severe crisis long before 1860. Deeply held principles of freedom, equality, and autonomy, as expressed in republican values, appeared threatened, especially during the 1850s, by the relentless expansion of market relations and commercial agriculture. Alabamians were thus, he judged, prepared to believe the worst once Lincoln was elected.
The politicians of the 1850s were acting in a society in which the traditional restraints that suppressed sectional conflict in the 1820s and 1830s– the most important of which was the stability of the two-party system– were being eroded as a rapid extension of mass democracy went forward in the North and South. It was an era when the mass political party galvanized voter participation to an unprecedented degree, and a time in which politics formed an essential component of American mass culture. Historians agree that political involvement was a larger concern to the average American in the 1850s than today. Politics was, in one of its functions, a form of mass entertainment, a spectacle with rallies, parades, and colorful personalities. Leading politicians, moreover, often served as a focus for popular interests, aspirations, and values.
Historian Allan Nevins, for instance, writes of political rallies in 1856 with turnouts of anywhere from twenty to fifty thousand men and women. Voter turnouts even ran as high as 84% by 1860. A plethora of new parties emerged in 1854-56, including the Republicans, People's party men, Anti-Nebraskans, Fusionists, Know-Nothings, Know-Somethings (anti-slavery nativists), Maine Lawites, Temperance men, Rum Democrats, Silver Gray Whigs, Hindus, Hard Shell Democrats, Soft Shells, Half Shells and Adopted Citizens. By 1858, they were mostly gone, and politics divided four ways. Republicans controlled most Northern states with a strong Democratic minority. The Democrats were split North and South and fielded two tickets in 1860. Southern non-Democrats tried different coalitions; most supported the Constitutional Union party in 1860.
Many Southern states held constitutional conventions in 1851 to consider the questions of nullification and secession. With the exception of South Carolina, whose convention election did not even offer the option of "no secession" but rather "no secession without the collaboration of other states," the Southern conventions were dominated by Unionists who voted down articles of secession.
Historians today generally agree that economic conflicts were not a major cause of the war. While an economic basis to the sectional crisis was popular among the “Progressive school” of historians from the 1910s to the 1940s, few 'professional historians' now subscribe to this explanation. According to economic historian Lee A. Craig, "In fact, numerous studies by economic historians over the past several decades reveal that economic conflict was not an inherent condition of North-South relations during the antebellum era and did not cause the Civil War."
When numerous groups tried at the last minute in 1860-61 to find a compromise to avert war, they did not turn to economic policies. The three major attempts at compromise, the Crittenden Compromise, the Corwin Amendment and the Washington Peace Conference, addressed only the slavery-related issues of fugitive slave laws, personal liberty laws, slavery in the territories and interference with slavery within the existing slave states.
Historian James L. Huston emphasizes the role of slavery as an economic institution. In October 1860 William Lowndes Yancey, a leading advocate of secession, placed the value of Southern-held slaves at $2.8 billion. Huston writes:
Understanding the relations between wealth, slavery, and property rights in the South provides a powerful means of understanding southern political behavior leading to disunion. First, the size dimensions of slavery are important to comprehend, for slavery was a colossal institution. Second, the property rights argument was the ultimate defense of slavery, and white southerners and the proslavery radicals knew it. Third, the weak point in the protection of slavery by property rights was the federal government.... Fourth, the intense need to preserve the sanctity of property rights in Africans led southern political leaders to demand the nationalization of slavery– the condition under which slaveholders would always be protected in their property holdings.
The figures quoted by Southerners in the 1850s would equate to about $70 billion in 2009 dollars. That said, the figures need to be interpreted in the context of American economic perspectives as they existed in the mid-19th century. Back then, the United States would have to be considered (in a purely financial sense) a poor country by any modern standard, and (relatively speaking) the South was the poorest part of the country. To most Americans living at the time, one million dollars seemed a far vaster fortune in the 1850s than even its inflation-adjusted equivalent would seem today, let alone any sum in the billions of dollars.
The cotton gin greatly increased the efficiency with which cotton could be harvested, contributing to the consolidation of "King Cotton" as the backbone of the economy of the Deep South, and to the entrenchment of the system of slave labor on which the cotton plantation economy depended.
The tendency of monoculture cotton plantings to lead to soil exhaustion created a need for cotton planters to move their operations to new lands, and therefore to the westward expansion of slavery from the Eastern seaboard into new areas (e.g., Alabama, Mississippi, and beyond to East Texas).
The South, Midwest, and Northeast had quite different economic structures. They traded with each other and each became more prosperous by staying in the Union, a point many businessmen made in 1860-61. However, Charles Beard in the 1920s made a highly influential argument to the effect that these differences caused the war (rather than slavery or constitutional debates). He saw the industrial Northeast forming a coalition with the agrarian Midwest against the Plantation South. Critics pointed out that his image of a unified Northeast was incorrect because the region was highly diverse, with many different competing economic interests. In 1860-61, most business interests in the Northeast opposed war.
After 1950, only a few mainstream historians accepted the Beard interpretation, though it was accepted by libertarian economists. As historian Kenneth Stampp, who abandoned Beardianism after 1950, sums up the scholarly consensus: "Most historians...now see no compelling reason why the divergent economies of the North and South should have led to disunion and civil war; rather, they find stronger practical reasons why the sections, whose economies neatly complemented one another, should have found it advantageous to remain united."
Historian Eric Foner has argued that a free-labor ideology dominated thinking in the North, which emphasized economic opportunity. By contrast, Southerners described free labor as "greasy mechanics, filthy operators, small-fisted farmers, and moonstruck theorists". They strongly opposed the homestead laws that were proposed to give free farms in the west, fearing the small farmers would oppose plantation slavery. Indeed, opposition to homestead laws was far more common in secessionist rhetoric than opposition to tariffs. Southerners such as Calhoun argued that slavery was "a positive good", and that slaves were more civilized and morally and intellectually improved because of slavery.
Led by Mark Noll, a body of scholarship has highlighted the fact that the American debate over slavery became a shooting war in part because the two sides reached diametrically opposite conclusions based on reading the same authoritative source of guidance on moral questions: the King James Version of the Bible.
After the American Revolution and the disestablishment of government-sponsored churches, the U.S. experienced the Second Great Awakening, a massive Protestant revival. Without centralized church authorities, American Protestantism was heavily reliant on the Bible, which was read in the standard 19th-century Reformed hermeneutic of "common sense", literal interpretation as if the Bible were speaking directly about the modern American situation instead of events that occurred in a much different context, millennia ago. By the mid-1800s this form of religion and Bible interpretation had become a dominant strand in American religious, moral and political discourse, almost serving as a de facto state religion.
The problem that this caused for resolving the slavery question was that the Bible, interpreted under these assumptions, seemed clearly to suggest that slavery was Biblically justified.
Protestant churches in the U.S., unable to agree on what God's Word said about slavery, ended up with schisms between Northern and Southern branches: the Methodists in 1844, the Baptists in 1845, and the Presbyterians in 1857. These splits presaged the subsequent split in the nation: "The churches played a major role in the dividing of the nation, and it is probably true that it was the splits in the churches which made a final split of the nation inevitable." The conflict over how to interpret the Bible was central to the dispute.
There were many causes of the Civil War, but the religious conflict, almost unimaginable in modern America, cut very deep at the time. Noll and others highlight the significance of the religion issue for the famous phrase in Lincoln's second inaugural: "Both read the same Bible and pray to the same God, and each invokes His aid against the other."
Antislavery movements in the North gained momentum in the 1830s and 1840s, a period of rapid transformation of Northern society that inspired a social and political reformism. Many of the reformers of the period, including abolitionists, attempted in one way or another to transform the lifestyle and work habits of labor, helping workers respond to the new demands of an industrializing, capitalistic society.
Antislavery, like many other reform movements of the period, was influenced by the legacy of the Second Great Awakening, a period of religious revival in the new country stressing the reform of individuals which was still relatively fresh in the American memory. Thus, while the reform spirit of the period was expressed by a variety of movements with often-conflicting political goals, most reform movements shared a common feature in their emphasis on the Great Awakening principle of transforming the human personality through discipline, order, and restraint.
"Abolitionist" had several meanings at the time. The followers of William Lloyd Garrison, including Wendell Phillips and Frederick Douglass, demanded the "immediate abolition of slavery", hence the name. A more pragmatic group of abolitionists, like Theodore Weld and Arthur Tappan, wanted immediate action, but that action might well be a program of gradual emancipation, with a long intermediate stage. "Antislavery men", like John Quincy Adams, did what they could to limit slavery and end it where possible, but were not part of any abolitionist group. For example, in 1841 Adams represented the Amistad African slaves in the Supreme Court of the United States and argued that they should be set free. In the last years before the war, "antislavery" could mean the Northern majority, like Abraham Lincoln, who opposed expansion of slavery or its influence, as by the Kansas-Nebraska Act, or the Fugitive Slave Act. Many Southerners called all these abolitionists, without distinguishing them from the Garrisonians. James McPherson explains the abolitionists' deep beliefs: "All people were equal in God's sight; the souls of black folks were as valuable as those of whites; for one of God's children to enslave another was a violation of the Higher Law, even if it was sanctioned by the Constitution."
Stressing the Yankee Protestant ideals of self-improvement, industry, and thrift, most abolitionists– most notably William Lloyd Garrison– condemned slavery as a lack of control over one's own destiny and the fruits of one's labor.
The experience of the fifty years… shows us the slaves trebling in numbers—slaveholders monopolizing the offices and dictating the policy of the Government—prostituting the strength and influence of the Nation to the support of slavery here and elsewhere—trampling on the rights of the free States, and making the courts of the country their tools. To continue this disastrous alliance longer is madness.… Why prolong the experiment?
Abolitionists also attacked slavery as a threat to the freedom of white Americans. Defining freedom as more than a simple lack of restraint, antebellum reformers held that the truly free man was one who imposed restraints upon himself. Thus, for the anti-slavery reformers of the 1830s and 1840s, the promise of free labor and upward social mobility (opportunities for advancement, rights to own property, and to control one's own labor), was central to the ideal of reforming individuals.
Controversy over the so-called Ostend Manifesto (which proposed the U.S. annexation of Cuba as a slave state) and the Fugitive Slave Act kept sectional tensions alive before the issue of slavery in the West could occupy the country's politics in the mid-to-late 1850s.
Antislavery sentiment among some groups in the North intensified after the Compromise of 1850, when Southerners began appearing in Northern states to pursue fugitives or often to claim as slaves free African Americans who had resided there for years. Meanwhile, some abolitionists openly sought to prevent enforcement of the law. Violation of the Fugitive Slave Act was often open and organized. In Boston– a city from which it was boasted that no fugitive had ever been returned– Theodore Parker and other members of the city's elite helped form mobs to prevent enforcement of the law as early as April 1851. A pattern of public resistance emerged in city after city, notably in Syracuse in 1851 (culminating in the Jerry Rescue incident late that year), and Boston again in 1854. But the issue did not lead to a crisis until revived by the same issue underlying the Missouri Compromise of 1820: slavery in the territories.
William Lloyd Garrison, a prominent abolitionist, was motivated by a belief in the growth of democracy. Because the Constitution had a three-fifths clause, a fugitive slave clause and a 20-year extension of the Atlantic slave trade, Garrison once publicly burned a copy of the U. S. Constitution and called it "a covenant with death and an agreement with hell". In 1854, he said:
"I am a believer in that portion of the Declaration of American Independence in which it is set forth, as among self-evident truths, 'that all men are created equal; that they are endowed by their Creator with certain inalienable rights; that among these are life, liberty, and the pursuit of happiness.' Hence, I am an abolitionist. Hence, I cannot but regard oppression in every form—and most of all, that which turns a man into a thing—with indignation and abhorrence."
Confederate Vice President Alexander Stephens, by contrast, declared in his "Cornerstone Speech" of March 1861: "(Thomas Jefferson's) ideas, however, were fundamentally wrong. They rested upon the assumption of the equality of races. This was an error.... Our new government is founded upon exactly the opposite idea; its foundations are laid, its corner-stone rests, upon the great truth that the negro is not equal to the white man; that slavery—subordination to the superior race—is his natural and normal condition."
The assumptions, tastes, and cultural aims of the reformers of the 1830s and 1840s anticipated the political and ideological ferment of the 1850s. A surge of working class Irish and German Catholic immigration provoked reactions among many Northern Whigs, as well as Democrats. Growing fears of labor competition for white workers and farmers because of the growing number of free blacks prompted several northern states to adopt discriminatory "Black Codes".
In the Northwest, although farm tenancy was increasing, the number of free farmers was still double that of farm laborers and tenants. Moreover, although the expansion of the factory system was undermining the economic independence of the small craftsman and artisan, industry in the region, still largely one of small towns, remained concentrated in small-scale enterprises. Arguably, social mobility was on the verge of contracting in the urban centers of the North, but long-cherished ideas of opportunity, "honest industry" and "toil" were at least close enough in time to lend plausibility to the free labor ideology.
In the rural and small-town North, the picture of Northern society (framed by the ethos of "free labor") corresponded to a large degree with reality. Propelled by advancements in transportation and communication– especially steam navigation, railroads, and telegraphs– the two decades before the Civil War saw rapid expansion of the Northwest's population and economy. Combined with the rise of Northeastern and export markets for their products, this growth substantially improved the social standing of farmers in the region. The small towns and villages that emerged as the Republican Party's heartland showed every sign of vigorous expansion. Their vision for an ideal society was of small-scale capitalism, with white American laborers entitled to the chance of upward mobility (opportunities for advancement, rights to own property, and to control their own labor). Many free-soilers demanded that the slave labor system and free black settlers (and, in places such as California, Chinese immigrants) should be excluded from the Great Plains to guarantee the predominance there of the free white laborer.
Opposition to the 1847 Wilmot Proviso helped to consolidate the "free-soil" forces. The next year, Radical New York Democrats known as Barnburners, members of the Liberty Party, and anti-slavery Whigs held a convention at Buffalo, New York, in August, forming the Free-Soil Party. The party supported former President Martin Van Buren and Charles Francis Adams, Sr., for President and Vice President, respectively. The party opposed the expansion of slavery into territories where it had not yet existed, such as Oregon and the ceded Mexican territory.
Relating Northern and Southern positions on slavery to basic differences in labor systems, but insisting on the role of culture and ideology in coloring these differences, Eric Foner's book Free Soil, Free Labor, Free Men (1970) went beyond the economic determinism of Charles Beard (a leading historian of the 1930s). Foner emphasized the importance of free labor ideology to Northern opponents of slavery, pointing out that the moral concerns of the abolitionists were not necessarily the dominant sentiments in the North. Many Northerners (including Lincoln) opposed slavery also because they feared that black labor might spread to the North and threaten the position of free white laborers. In this sense, Republicans and the abolitionists were able to appeal to powerful emotions in the North through a broader commitment to "free labor" principles. The "Slave Power" idea had a far greater appeal to Northern self-interest than arguments based on the plight of black slaves in the South. If the free labor ideology of the 1830s and 1840s depended on the transformation of Northern society, its entry into politics depended on the rise of mass democracy, in turn propelled by far-reaching social change. Its chance would come by the mid-1850s with the collapse of the traditional two-party system, which had long suppressed sectional conflict.
A series of resolutions beginning with a Pinckney Resolution banned petitions for ending slavery from being introduced before the United States House of Representatives from 1835 to 1844. These petitions were known as the Gag Rule, with Southern Representatives supporting the gag and Northern Whigs (especially John Quincy Adams) opposing the gag.
Soon after the Mexican War started, and long before negotiation of the new US-Mexico border, the question of slavery in the territories to be acquired polarized the Northern and Southern United States in the most bitter sectional conflict up to this time. The resulting deadlock lasted four years, during which the Second Party System broke up, Mormon pioneers settled Utah, the California Gold Rush settled California, and New Mexico, under a federal military government, turned back Texas's attempt to assert control over territory Texas claimed as far west as the Rio Grande. Several competing proposals for handling slavery in the territories were advanced before the Compromise of 1850 eventually preserved the Union, but only for another decade.
States' rights was an issue in the 19th century for those who felt that the authority of the individual states superseded that of the federal government, and that the federal government was violating the role intended for it by the Founding Fathers of the United States. Kenneth M. Stampp notes that each section used states' rights arguments when convenient, and shifted positions when convenient. For example, the Fugitive Slave Act of 1850 was justified by its supporters as a state's right to have its property laws respected by other states, and was resisted by northern legislatures in the form of state personal liberty laws that placed state laws above the federal mandate.
Arthur M. Schlesinger, Jr. noted that the states' rights doctrine “never had any real vitality independent of underlying conditions of vast social, economic, or political significance.” He further elaborated:
From the close of the nullification episode of 1832-1833 to the outbreak of the Civil War, the agitation of state rights was intimately connected with the new issue of growing importance, the slavery question, and the principle form assumed by the doctrine was the right of secession. The pro-slavery forces sought refuge in the state rights position as a shield against federal interference with pro-slavery projects.... As a natural consequence, anti-slavery legislatures in the North were led to lay great stress on the national character of the Union and the broad powers of the general government in dealing with slavery. Nevertheless, it is significant to note that when it served anti-slavery purposes better to lapse into state rights dialectic, northern legislatures did not hesitate to be inconsistent.
Echoing Schlesinger, Forrest McDonald wrote that “the dynamics of the tension between federal and state authority changed abruptly during the late 1840s” as a result of the acquisition of territory in the Mexican War. McDonald states:
And then, as a by-product or offshoot of a war of conquest, slavery– a subject that leading politicians had, with the exception of the gag rule controversy and Calhoun’s occasional outbursts, scrupulously kept out of partisan debate– erupted as the dominant issue in that arena. So disruptive was the issue that it subjected the federal Union to the greatest strain the young republic had yet known.
States' rights theories were a response to a growing awareness of the fact that the Northern population was growing much faster than the population of the South; Southerners concluded that it was only a matter of time before the North controlled the federal government. Acting as a "conscious minority," Southerners hoped that a strict constructionist interpretation of the Constitution would limit federal power over the states, and that a defense of states' rights against federal encroachments, or even nullification or secession, would save the South. Before 1860, most presidents were either Southern or pro-South. The North's growing population would mean the election of pro-North presidents, and the addition of free-soil states would end Southern parity with the North in the Senate. As the historian Allan Nevins described the Southern politician John C. Calhoun's theory of states' rights, "Governments, observed Calhoun, were formed to protect minorities, for majorities could take care of themselves".
Until the 1860 election, the South’s interests nationally were entrusted to the Democratic Party. In 1860, the Democratic Party split into Northern and Southern factions as the result of a “bitter debate in the United States Senate between Jefferson Davis and Stephen Douglas.” The debate was over resolutions proposed by Davis “opposing popular sovereignty and supporting a federal slave code and states’ rights” which carried over to the national convention in Charleston.
Davis defined equality in terms of the equal rights of states, and opposed the declaration that all men are created equal. Jefferson Davis stated that a "disparaging discrimination" and a fight for "liberty" against "the tyranny of an unbridled majority" gave the Confederate states a right to secede. In 1860, Congressman Laurence M. Keitt of South Carolina said, "The anti-slavery party contend that slavery is wrong in itself, and the Government is a consolidated national democracy. We of the South contend that slavery is right, and that this is a confederate Republic of sovereign States."
Stampp mentioned Confederate Vice President Alexander Stephens' A Constitutional View of the Late War Between the States as an example of a Southern leader who said that slavery was the "cornerstone of the Confederacy" when the war began and then said that the war was not about slavery but states' rights after Southern defeat. Stampp said that Stephens became one of the most ardent defenders of the Lost Cause.
To the old Union they had said that the Federal power had no authority to interfere with slavery issues in a state. To their new nation they would declare that the state had no power to interfere with a federal protection of slavery. Of all the many testimonials to the fact that slavery, and not states rights, really lay at the heart of their movement, this was the most eloquent of all.
The victory of the United States over Mexico resulted in the addition of large new territories conquered from Mexico. Controversy over whether these territories would be slave or free raised the risk of a war between slave and free states, and Northern support for the Wilmot Proviso, which would have banned slavery in the conquered territories, increased sectional tensions. The controversy was temporarily resolved by the Compromise of 1850, which allowed the territories of Utah and New Mexico to decide for or against slavery, but also allowed the admission of California as a free state, reduced the size of the slave state of Texas by adjusting the boundary, and ended the slave trade (but not slavery itself) in the District of Columbia. In return, the South got a stronger fugitive slave law than the version mentioned in the Constitution. The Fugitive Slave Law would reignite controversy over slavery.
The Fugitive Slave Law of 1850 required that Northerners assist Southerners in reclaiming fugitive slaves, which many Northerners found to be extremely offensive. Anthony Burns was among the fugitive slaves captured and returned in chains to slavery as a result of the law. Harriet Beecher Stowe's best-selling novel Uncle Tom's Cabin greatly increased opposition to the Fugitive Slave Law.
Most people thought the Compromise had ended the territorial issue, but Stephen A. Douglas reopened it in 1854, in the name of democracy. Douglas proposed the Kansas-Nebraska Bill with the intention of opening up vast new high quality farm lands to settlement. As a Chicagoan, he was especially interested in the railroad connections from Chicago into Kansas and Nebraska, but that was not a controversial point. More importantly, Douglas firmly believed in democracy at the grass roots—that actual settlers have the right to decide on slavery, not politicians from other states. His bill provided that popular sovereignty, through the territorial legislatures, should decide "all questions pertaining to slavery", thus effectively repealing the Missouri Compromise. The ensuing public reaction against it created a firestorm of protest in the Northern states. It was seen as an effort to repeal the Missouri Compromise. However, the popular reaction in the first month after the bill's introduction failed to foreshadow the gravity of the situation. As Northern papers initially ignored the story, Republican leaders lamented the lack of a popular response.
Eventually, the popular reaction did come, but the leaders had to spark it. Chase's "Appeal of the Independent Democrats" did much to arouse popular opinion. In New York, William H. Seward finally took it upon himself to organize a rally against the Nebraska bill, since none had arisen spontaneously. Press such as the National Era, the New York Tribune, and local free-soil journals, condemned the bill. The Lincoln-Douglas debates of 1858 drew national attention to the issue of slavery expansion.
Convinced that Northern society was superior to that of the South, and increasingly persuaded of the South's ambitions to extend slave power beyond its existing borders, Northerners were embracing a viewpoint that made conflict likely; but conflict required the ascendancy of the Republican Party. The Republican Party– campaigning on the popular, emotional issue of "free soil" in the frontier– captured the White House after just six years of existence.
The Republican Party grew out of the controversy over the Kansas-Nebraska legislation. Once the Northern reaction against the Kansas-Nebraska Act took place, its leaders acted to advance another political reorganization. Henry Wilson declared the Whig Party dead and vowed to oppose any efforts to resurrect it. Horace Greeley's Tribune called for the formation of a new Northern party, and Benjamin Wade, Chase, Charles Sumner, and others spoke out for the union of all opponents of the Nebraska Act. The National Era's Gamaliel Bailey was involved in calling a caucus of anti-slavery Whig and Democratic Party Congressmen in May.
Meeting in a Ripon, Wisconsin, Congregational Church on February 28, 1854, some thirty opponents of the Nebraska Act called for the organization of a new political party and suggested that "Republican" would be the most appropriate name (to link their cause to the defunct Republican Party of Thomas Jefferson). These founders also took a leading role in the creation of the Republican Party in many northern states during the summer of 1854. While conservatives and many moderates were content merely to call for the restoration of the Missouri Compromise or a prohibition of slavery extension, radicals advocated repeal of the Fugitive Slave Laws and rapid abolition in existing states. The term "radical" has also been applied to those who objected to the Compromise of 1850, which extended slavery in the territories.
But without the benefit of hindsight, the 1854 elections would seem to indicate the possible triumph of the Know-Nothing movement rather than anti-slavery, with the Catholic/immigrant question replacing slavery as the issue capable of mobilizing mass appeal. Know-Nothings, for instance, captured the mayoralty of Philadelphia with a majority of over 8,000 votes in 1854. Even after opening up immense discord with his Kansas-Nebraska Act, Senator Douglas began speaking of the Know-Nothings, rather than the Republicans, as the principal danger to the Democratic Party.
When Republicans spoke of themselves as a party of "free labor," they appealed to a rapidly growing, primarily middle class base of support, not permanent wage earners or the unemployed (the working class). When they extolled the virtues of free labor, they were merely reflecting the experiences of millions of men who had "made it" and millions of others who had a realistic hope of doing so. Like the Tories in England, the Republicans in the United States would emerge as the nationalists, homogenizers, imperialists, and cosmopolitans.
Those who had not yet "made it" included Irish immigrants, who made up a large growing proportion of Northern factory workers. Republicans often saw the Catholic working class as lacking the qualities of self-discipline, temperance, and sobriety essential for their vision of ordered liberty. Republicans insisted that there was a high correlation between education, religion, and hard work—the values of the "Protestant work ethic"—and Republican votes. "Where free schools are regarded as a nuisance, where religion is least honored and lazy unthrift is the rule," read an editorial of the pro-Republican Chicago Democratic Press after James Buchanan's defeat of John C. Fremont in the 1856 presidential election, "there Buchanan has received his strongest support."
Ethno-religious, socio-economic, and cultural fault lines ran throughout American society, but were becoming increasingly sectional, pitting Yankee Protestants with a stake in the emerging industrial capitalism and American nationalism increasingly against those tied to Southern slave holding interests. For example, acclaimed historian Don E. Fehrenbacher, in his Prelude to Greatness, Lincoln in the 1850s, noticed how Illinois was a microcosm of the national political scene, pointing out voting patterns that bore striking correlations to regional patterns of settlement. Those areas settled from the South were staunchly Democratic, while those by New Englanders were staunchly Republican. In addition, a belt of border counties were known for their political moderation, and traditionally held the balance of power. Intertwined with religious, ethnic, regional, and class identities, the issues of free labor and free soil were thus easy to play on.
Events during the next two years in "Bleeding Kansas" sustained the popular fervor originally aroused among some elements in the North by the Kansas-Nebraska Act. Free-State settlers from the North were encouraged by press and pulpit and the powerful organs of abolitionist propaganda. Often they received financial help from such organizations as the Massachusetts Emigrant Aid Company. Those from the South often received financial contributions from the communities they left. Southerners sought to uphold their constitutional rights in the territories and to maintain sufficient political strength to repulse "hostile and ruinous legislation."
While the Great Plains were largely unfit for the cultivation of cotton, informed Southerners demanded that the West be open to slavery, often—perhaps most often—with minerals in mind. Brazil, for instance, was an example of the successful use of slave labor in mining. In the middle of the eighteenth century, diamond mining supplemented gold mining in Minas Gerais and accounted for a massive transfer of masters and slaves from Brazil's northeastern sugar region. Southern leaders knew a good deal about this experience. It was even promoted in the pro-slavery DeBow's Review as far back as 1848.
In Kansas around 1855, the slavery issue reached a condition of intolerable tension and violence. But this was in an area where an overwhelming proportion of settlers were merely land-hungry Westerners indifferent to the public issues. The majority of the inhabitants were not concerned with sectional tensions or the issue of slavery. Instead, the tension in Kansas began as a contention between rival claimants. During the first wave of settlement, no one held titles to the land, and settlers rushed to occupy newly opened land fit for cultivation. While the tension and violence did emerge as a pattern pitting Yankee and Missourian settlers against each other, there is little evidence of any ideological divides on the questions of slavery. Instead, the Missouri claimants, thinking of Kansas as their own domain, regarded the Yankee squatters as invaders, while the Yankees accused the Missourians of grabbing the best land without honestly settling on it.
However, the 1855-56 violence in "Bleeding Kansas" did reach an ideological climax after John Brown– regarded by followers as the instrument of God's will to destroy slavery– entered the melee. His killing of five pro-slavery settlers (the so-called "Pottawatomie Massacre", during the night of May 24, 1856) resulted in some irregular, guerrilla-style strife. Aside from John Brown's fervor, the strife in Kansas often involved only armed bands more interested in land claims or loot.
Of greater importance than the civil strife in Kansas, however, was the reaction against it nationwide and in Congress. In both North and South, the belief was widespread that the aggressive designs of the other section were epitomized by (and responsible for) what was happening in Kansas. Consequently, "Bleeding Kansas" emerged as a symbol of sectional controversy.
Indignant over the developments in Kansas, the Republicans—the first entirely sectional major party in U.S. history—entered their first presidential campaign with confidence. Their nominee, John C. Frémont, was a generally safe candidate for the new party. Although his nomination upset some of their Nativist Know-Nothing supporters (his mother was a Catholic), the nomination of the famed explorer of the Far West with no political record was an attempt to woo ex-Democrats. The other two Republican contenders, William H. Seward and Salmon P. Chase, were seen as too radical.
Nevertheless, the campaign of 1856 was waged almost exclusively on the slavery issue—pitted as a struggle between democracy and aristocracy—focusing on the question of Kansas. The Republicans condemned the Kansas-Nebraska Act and the expansion of slavery, but they advanced a program of internal improvements combining the idealism of anti-slavery with the economic aspirations of the North. The new party rapidly developed a powerful partisan culture, and energetic activists drove voters to the polls in unprecedented numbers. People reacted with fervor. Young Republicans organized the "Wide Awake" clubs and chanted "Free Soil, Free Labor, Free Men, Frémont!" With Southern fire-eaters and even some moderates uttering threats of secession if Frémont won, the Democratic candidate, Buchanan, benefited from apprehensions about the future of the Union.
The Lecompton Constitution and Dred Scott v. Sandford were both part of the Bleeding Kansas controversy over slavery as a result of the Kansas Nebraska Act, which was Stephen Douglas' attempt at replacing the Missouri Compromise ban on slavery in the Kansas and Nebraska territories with popular sovereignty, which meant that the people of a territory could vote either for or against slavery. The Lecompton Constitution, which would have allowed slavery in Kansas, was the result of massive vote fraud by the pro-slavery Border Ruffians. Douglas defeated the Lecompton Constitution because it was supported by the minority of pro-slavery people in Kansas, and Douglas believed in majority rule. Douglas hoped that both South and North would support popular sovereignty, but the opposite was true. Neither side trusted Douglas.
The Supreme Court decision of 1857 in Dred Scott v. Sandford added to the controversy. Chief Justice Roger B. Taney's decision said that slaves were "so far inferior that they had no rights which the white man was bound to respect", and that slavery could spread into the territories even if the majority of people in the territories were anti-slavery. Lincoln warned that "the next Dred Scott decision" could threaten Northern states with slavery.
President James Buchanan decided to end the troubles in Kansas by urging Congress to admit Kansas as a slave state under the Lecompton Constitution. Kansas voters, however, soundly rejected this constitution (though there was a measure of fraud on both sides) by more than 10,000 votes. As Buchanan directed his presidential authority to this goal, he further angered the Republicans and alienated members of his own party. Prompting their break with the administration, the Douglasites saw this scheme as an attempt to pervert the principle of popular sovereignty on which the Kansas-Nebraska Act was based. Nationwide, conservatives were incensed, feeling as though the principles of states' rights had been violated. Even in the South, ex-Whigs and border-state Know-Nothings, most notably John Bell and John J. Crittenden (key figures in sectional controversies), urged the Republicans to oppose the administration's moves and take up the demand that the territories be given the power to accept or reject slavery.
As the schism in the Democratic party deepened, moderate Republicans argued that an alliance with anti-administration Democrats, especially Stephen Douglas, would be a key advantage in the 1860 elections. Some Republican observers saw the controversy over the Lecompton Constitution as an opportunity to peel off Democratic support in the border states, where Frémont picked up little support. After all, the border states had often gone for Whigs with a Northern base of support in the past without prompting threats of Southern withdrawal from the Union.
Among the proponents of this strategy was The New York Times, which called on the Republicans to downplay opposition to popular sovereignty in favor of a compromise policy calling for "no more slave states" in order to quell sectional tensions. The Times maintained that for the Republicans to be competitive in the 1860 elections, they would need to broaden their base of support to include all voters who for one reason or another were upset with the Buchanan Administration.
Indeed, pressure was strong for an alliance that would unite the growing opposition to the Democratic Administration. But such an alliance was no novel idea; it would essentially entail transforming the Republicans into the national, conservative, Union party of the country. In effect, this would be a successor to the Whig party.
Republican leaders, however, staunchly opposed any attempts to modify the party position on slavery, appalled by what they considered a surrender of their principles when, for example, all the ninety-two Republican members of Congress voted for the Crittenden-Montgomery bill in 1858. Although this compromise measure blocked Kansas' entry into the union as a slave state, the fact that it called for popular sovereignty, rather than outright opposition to the expansion of slavery, was troubling to the party leaders.
In the end, the Crittenden-Montgomery bill did not forge a grand anti-administration coalition of Republicans, ex-Whig Southerners in the border states, and Northern Democrats. Instead, the Democratic Party merely split along sectional lines. Anti-Lecompton Democrats complained that a new, pro-slavery test had been imposed upon the party. The Douglasites, however, refused to yield to administration pressure. Like the anti-Nebraska Democrats, who were now members of the Republican Party, the Douglasites insisted that they, not the administration, commanded the support of most northern Democrats.
Extremist sentiment in the South advanced dramatically as the Southern planter class saw its hold on the executive, legislative, and judicial apparatus of the central government wane. It also grew increasingly difficult for Southern Democrats to manipulate power in many of the Northern states through their allies in the Democratic Party.
Even before news of the Kansas skirmishes reached the East coast, a related violent escapade occurred in Washington on May 22. Charles Sumner's May 19 speech before the Senate entitled "The Crime Against Kansas", which condemned the Pierce Administration and the institution of slavery, singled out Senator Andrew P. Butler of South Carolina, a strident defender of slavery. Its markedly sexual innuendo cast the South Carolinian as the "Don Quixote" of slavery, who has "chosen a mistress [the harlot slavery]... who, though ugly to others, is always lovely to him, though polluted in the sight of the world is chaste in his sight." Three days later, Sumner fell victim to the Southern gentleman's code, which instructed retaliation for impugning the honor of an elderly kinsman. Bleeding and unconscious after a nearly fatal assault with a heavy cane by Butler's nephew, U.S. Representative Preston Brooks– and unable to return to the Senate for three years– the Massachusetts senator emerged as another symbol of sectional tensions. During the beating, many Southern Senators formed a ring around the fight, so as to prevent the Northern Senators from saving Sumner. For many in the North, the caning illustrated the barbarism of slave society; by contrast, Brooks was lauded as a hero by many Southerners, with dozens of his fellow South Carolinians sending him new canes, including one with the label "Hit him again".
Despite their significant loss in the election of 1856, Republican leaders realized that even though they appealed only to Northern voters, they need win only two more states, such as Pennsylvania and Illinois, to win the presidency in 1860.
As the Democrats were grappling with their own troubles, leaders in the Republican party fought to keep elected members focused on the issue of slavery in the West, which allowed them to mobilize popular support. Chase wrote Sumner that if the conservatives succeeded, it might be necessary to recreate the Free Soil Party. He was also particularly disturbed by the tendency of many Republicans to eschew moral attacks on slavery for political and economic arguments.
The controversy over slavery in the West was still not creating a fixation on the issue of slavery. Although the old restraints on the sectional tensions were being eroded with the rapid extension of mass politics and mass democracy in the North, the perpetuation of conflict over the issue of slavery in the West still required the efforts of radical Democrats in the South and radical Republicans in the North. They had to ensure that the sectional conflict would remain at the center of the political debate.
William Seward contemplated this potential in the 1840s, when the Democrats were the nation's majority party, usually controlling Congress, the presidency, and many state offices. The country's institutional structure and party system allowed slaveholders to prevail in more of the nation's territories and to garner a great deal of influence over national policy. With growing popular discontent with the unwillingness of many Democratic leaders to take a stand against slavery, and growing consciousness of the party's increasingly pro-Southern stance, Seward became convinced that the only way for the Whig Party to counteract the Democrats' strong monopoly of the rhetoric of democracy and equality was for the Whigs to embrace anti-slavery as a party platform. Once again, to increasing numbers of Northerners, the Southern labor system was increasingly seen as contrary to the ideals of American democracy.
Republicans believed in the existence of "the Slave Power Conspiracy," which had seized control of the federal government and was attempting to pervert the Constitution for its own purposes. The "Slave Power" idea gave the Republicans the anti-aristocratic appeal with which men like Seward had long wished to be associated politically. By fusing older anti-slavery arguments with the idea that slavery posed a threat to Northern free labor and democratic values, it enabled the Republicans to tap into the egalitarian outlook which lay at the heart of Northern society.
In this sense, during the 1860 presidential campaign, Republican orators even cast "Honest Abe" as an embodiment of these principles, repeatedly referring to him as "the child of labor" and "son of the frontier," who had proved how "honest industry and toil" were rewarded in the North. Although Lincoln had been a Whig, the "Wide Awakes" (members of the Republican clubs), used replicas of rails that he had split to remind voters of his humble origins.
In almost every northern state, organizers attempted to have a Republican Party or an anti-Nebraska fusion movement on ballots in 1854. In areas where the radical Republicans controlled the new organization, the comprehensive radical program became the party policy. Just as they helped organize the Republican Party in the summer of 1854, the radicals played an important role in the national organization of the party in 1856. Republican conventions in New York, Massachusetts, and Illinois adopted radical platforms. These radical platforms in such states as Wisconsin, Michigan, Maine, and Vermont usually called for the divorce of the government from slavery, the repeal of the Fugitive Slave Laws, and no more slave states, as did platforms in Pennsylvania, Minnesota, and Massachusetts when radical influence was high.
Conservatives at the Republican 1860 nominating convention in Chicago were able to block the nomination of William Seward, who had an earlier reputation as a radical (but by 1860 had been criticized by Horace Greeley as being too moderate). Other candidates had earlier joined or formed parties opposing the Whigs and had thereby made enemies of many delegates. Lincoln was selected on the third ballot. However, conservatives were unable to bring about the resurrection of "Whiggery." The convention's resolutions regarding slavery were roughly the same as they had been in 1856, but the language appeared less radical. In the following months, even Republican conservatives like Thomas Ewing and Edward Baker embraced the platform language that "the normal condition of territories was freedom". All in all, the organizers had done an effective job of shaping the official policy of the Republican Party.
Southern slave holding interests now faced the prospects of a Republican President and the entry of new free states that would alter the nation's balance of power between the sections. To many Southerners, the resounding defeat of the Lecompton Constitution foreshadowed the entry of more free states into the Union. Dating back to the Missouri Compromise, the Southern region desperately sought to maintain an equal balance of slave states and free states so as to be competitive in the Senate. Since the last slave state was admitted in 1845, five more free states had entered. The tradition of maintaining a balance between North and South was abandoned in favor of the addition of more free soil states.
The Lincoln-Douglas Debates were a series of seven debates in 1858 between Stephen Douglas, United States Senator from Illinois, and Abraham Lincoln, the Republican who sought to replace Douglas in the Senate. The debates were mainly about slavery. Douglas defended his Kansas Nebraska Act, which replaced the Missouri Compromise ban on slavery in the Louisiana Purchase territory north and west of Missouri with popular sovereignty, which allowed residents of territories such as Kansas to vote either for or against slavery. Douglas put Lincoln on the defensive by accusing him of being a Black Republican abolitionist, but Lincoln responded by asking Douglas to reconcile popular sovereignty with the Dred Scott decision. Douglas' Freeport Doctrine was that residents of a territory could keep slavery out by refusing to pass a slave code and other laws needed to protect slavery. Douglas' Freeport Doctrine, and the fact that he helped defeat the pro-slavery Lecompton Constitution, made Douglas unpopular in the South, which led to the 1860 split of the Democratic Party into Northern and Southern wings. The Democrats retained control of the Illinois legislature, and Douglas thus retained his seat in the U.S. Senate (at that time United States Senators were elected by the state legislatures, not by popular vote); however, Lincoln's national profile was greatly raised, paving the way for his election as president of the United States two years later.
In The Rise of American Civilization (1927), Charles and Mary Beard argue that slavery was not so much a social or cultural institution as an economic one (a labor system). The Beards cited inherent conflicts between Northeastern finance, manufacturing, and commerce and Southern plantations, which competed to control the federal government so as to protect their own interests. According to the economic determinists of the era, both groups used arguments over slavery and states' rights as a cover.
Recent historians have rejected the Beardian thesis. But their economic determinism has influenced subsequent historians in important ways. Modernization theorists, such as Raimondo Luraghi, have argued that as the Industrial Revolution was expanding on a worldwide scale, the days of wrath were coming for a series of agrarian, pre-capitalistic, "backward" societies throughout the world, from the Italian and American South to India. But most American historians point out the South was highly developed and on average about as prosperous as the North.
A few historians believe that the serious financial panic of 1857– and the economic difficulties leading up to it– strengthened the Republican Party and heightened sectional tensions. Before the panic, strong economic growth was being achieved under relatively low tariffs. Hence much of the nation concentrated on growth and prosperity.
The iron and textile industries were facing acute, worsening trouble each year after 1850. By 1854, stocks of iron were accumulating in each world market. Iron prices fell, forcing many American iron mills to shut down.
Republicans urged western farmers and northern manufacturers to blame the depression on the domination of the low-tariff economic policies of southern-controlled Democratic administrations. However the depression revived suspicion of Northeastern banking interests in both the South and the West. Eastern demand for western farm products shifted the West closer to the North. As the "transportation revolution" (canals and railroads) went forward, an increasingly large share and absolute amount of wheat, corn, and other staples of western producers– once difficult to haul across the Appalachians– went to markets in the Northeast. The depression emphasized the value of the western markets for eastern goods and homesteaders who would furnish markets and respectable profits.
Aside from the land issue, economic difficulties strengthened the Republican case for higher tariffs for industries in response to the depression. This issue was important in Pennsylvania and perhaps New Jersey.
Meanwhile, many Southerners grumbled over "radical" notions of giving land away to farmers that would "abolitionize" the area. While the ideology of Southern sectionalism was well-developed before the Panic of 1857 by figures like J.D.B. DeBow, the panic helped convince even more cotton barons that they had grown too reliant on Eastern financial interests.
Thomas Prentice Kettell, former editor of the Democratic Review, was another commentator popular in the South who enjoyed a great degree of prominence between 1857 and 1860. Kettell gathered an array of statistics in his book Southern Wealth and Northern Profits to show that the South produced vast wealth, while the North, dependent on the South's raw materials, siphoned off that wealth. Arguing that sectional inequality resulted from the concentration of manufacturing in the North, and from the North's supremacy in communications, transportation, finance, and international trade, his ideas paralleled old physiocratic doctrines that all profits of manufacturing and trade come out of the land. Political sociologists, such as Barrington Moore, have noted that these forms of romantic nostalgia tend to crop up whenever industrialization takes hold.
Such Southern hostility to the free farmers gave the North an opportunity for an alliance with Western farmers. After the political realignments of 1857-58—manifested by the emerging strength of the Republican Party and their networks of local support nationwide—almost every issue was entangled with the controversy over the expansion of slavery in the West. While questions of tariffs, banking policy, public land, and subsidies to railroads did not always unite all elements in the North and the Northwest against the interests of slaveholders in the South under the pre-1854 party system, they were translated in terms of sectional conflict—with the expansion of slavery in the West involved.
As the depression strengthened the Republican Party, slave holding interests were becoming convinced that the North had aggressive and hostile designs on the Southern way of life. The South was thus increasingly fertile ground for secessionism.
The Republicans' Whig-style personality-driven "hurrah" campaign helped stir hysteria in the slave states upon the emergence of Lincoln and intensify divisive tendencies, while Southern "fire eaters" gave credence to notions of the slave power conspiracy among Republican constituencies in the North and West. New Southern demands to re-open the African slave trade further fueled sectional tensions.
From the early 1840s until the outbreak of the Civil War, the cost of slaves had been rising steadily. Meanwhile, the price of cotton was experiencing market fluctuations typical of raw commodities. After the Panic of 1857, the price of cotton fell while the price of slaves continued its steep rise. At the 1858 Southern commercial convention, William L. Yancey of Alabama called for the reopening of the African slave trade. Only the delegates from the states of the Upper South, who profited from the domestic trade, opposed the reopening of the slave trade since they saw it as a potential form of competition. The convention in 1858 wound up voting to recommend the repeal of all laws against slave imports, despite some reservations.
On October 16, 1859, radical abolitionist John Brown led an attempt to start an armed slave revolt by seizing the U.S. Army arsenal at Harper's Ferry, Virginia (now West Virginia). Brown and twenty followers, both whites (including two of Brown's sons) and blacks (three free blacks, one freedman, and one fugitive slave), planned to seize the armory and use weapons stored there to arm black slaves in order to spark a general uprising by the slave population.
Although the raiders were initially successful in cutting the telegraph line and capturing the armory, they allowed a passing train to continue on to Washington, D.C., where the authorities were alerted to the attack. By October 17 the raiders were surrounded in the armory by the militia and other locals. Robert E. Lee (then a Colonel in the U.S. Army) led a company of U.S. Marines in storming the armory on October 18. Ten of the raiders were killed, including both of Brown's sons; Brown himself, along with a half dozen of his followers, was captured; four of the raiders escaped immediate capture. Six locals were killed and nine injured; the Marines suffered one dead and one injured. The local slave population failed to join in Brown's attack.
Brown was subsequently hanged for treason (against the Commonwealth of Virginia), as were six of his followers. The raid became a cause célèbre in both the North and the South, with Brown vilified by Southerners as a bloodthirsty fanatic, but celebrated by many Northern abolitionists as a martyr to the cause of freedom.
Initially, William H. Seward of New York, Salmon P. Chase of Ohio, and Simon Cameron of Pennsylvania, were the leading contenders for the Republican presidential nomination. But Abraham Lincoln, a former one-term House member who gained fame amid the Lincoln-Douglas Debates of 1858, had fewer political opponents within the party and out-maneuvered the other contenders. On May 16, 1860, he received the Republican nomination at their convention in Chicago, Illinois.
The schism in the Democratic Party over the Lecompton Constitution and Douglas' Freeport Doctrine caused Southern "fire-eaters" to oppose front-runner Stephen A. Douglas' bid for the Democratic presidential nomination. Douglas defeated the proslavery Lecompton Constitution for Kansas because the majority of Kansans were antislavery, and Douglas' popular sovereignty doctrine would allow the majority to vote slavery up or down as they chose. Douglas' Freeport Doctrine alleged that the antislavery majority of Kansans could thwart the Dred Scott decision that allowed slavery by withholding legislation for a slave code and other laws needed to protect slavery. As a result, Southern extremists demanded a slave code for the territories, and used this issue to divide the northern and southern wings of the Democratic Party. Southerners left the party and in June nominated John C. Breckinridge, while Northern Democrats supported Douglas. Consequently, the Southern planter class lost a considerable measure of sway in national politics. Because of the Democrats' division, the Republican nominee faced a divided opposition. Adding to Lincoln's advantage, ex-Whigs from the border states had earlier formed the Constitutional Union Party, nominating John Bell for President. Thus, party nominees waged regional campaigns. Douglas and Lincoln competed for Northern votes, while Bell, Douglas and Breckinridge competed for Southern votes.
"Vote yourself a farm– vote yourself a tariff" could have been a slogan for the Republicans in 1860. In sum, business was to support the farmers' demands for land (popular also in industrial working-class circles) in return for support for a higher tariff. To an extent, the elections of 1860 bolstered the political power of new social forces unleashed by the Industrial Revolution. In February 1861, after the seven states had departed the Union (four more would depart in April-May 1861; in late April, Maryland was unable to secede because it was put under martial law), Congress had a strong northern majority and passed the Morrill Tariff Act (signed by Buchanan), which increased duties and provided the government with funds needed for the war.
The Alabama extremist William Lowndes Yancey's demand for a federal slave code for the territories split the Democratic Party between North and South, which made the election of Lincoln possible. Yancey tried to make his demand for a slave code moderate enough to get Southern support and yet extreme enough to enrage Northerners and split the party. He demanded that the party support a slave code for the territories if later necessary, so that the demand would be conditional enough to win Southern support. His tactic worked, and lower South delegates left the Democratic Convention at Institute Hall in Charleston, South Carolina and walked over to Military Hall. The South Carolina extremist Robert Barnwell Rhett hoped that the lower South would completely break with the Northern Democrats and attend a separate convention at Richmond, Virginia, but lower South delegates gave the national Democrats one last chance at unification by going to the convention at Baltimore, Maryland before the split became permanent. The end result was that John C. Breckinridge became the candidate of the Southern Democrats, and Stephen Douglas became the candidate of the Northern Democrats.
Yancey's earlier attempt at demanding a slave code for the territories was his 1848 Alabama Platform, which was a response to the Northern Wilmot Proviso attempt at banning slavery in territories conquered from Mexico. Both the Alabama Platform and the Wilmot Proviso failed, but Yancey learned to be less overtly radical in order to get more support. Southerners thought they were merely demanding equality, in that they wanted Southern property in slaves to get the same (or more) protection as Northern forms of property.
With the emergence of the Republicans as the nation's first major sectional party by the mid-1850s, politics became the stage on which sectional tensions were played out. Although much of the West– the focal point of sectional tensions– was unfit for cotton cultivation, Southern secessionists read the political fallout as a sign that their power in national politics was rapidly weakening. Before, the slave system had been buttressed to an extent by the Democratic Party, which was increasingly seen as representing a more pro-Southern position that unfairly permitted Southerners to prevail in the nation's territories and to dominate national policy before the Civil War. But Democrats suffered a significant reverse in the electoral realignment of the mid-1850s. 1860 was a critical election that marked a stark change in existing patterns of party loyalties among groups of voters; Abraham Lincoln's election was a watershed in the balance of power of competing national and parochial interests and affiliations.
Once the election returns were certain, a special South Carolina convention declared "that the Union now subsisting between South Carolina and other states under the name of the 'United States of America' is hereby dissolved", heralding the secession of six more cotton states by February, and the formation of an independent nation, the Confederate States of America. Both the outgoing Buchanan administration and the incoming Lincoln administration refused to recognize the legality of secession or the legitimacy of the Confederacy. After Lincoln called for troops, four border states (that lacked cotton) seceded.
Disputes over the route of a proposed transcontinental railroad affected the timing of the Kansas Nebraska Act. The timing of the completion of a railroad from Georgia to South Carolina also was important, in that it allowed influential Georgians to declare their support for secession in South Carolina at a crucial moment. South Carolina secessionists feared that if they seceded first, they would be as isolated as they were during the Nullification Crisis. Support from Georgians was quickly followed by support for secession in the same South Carolina state legislature that previously preferred a cooperationist approach, as opposed to separate state secession.
The Totten system of forts (including forts Sumter and Pickens) designed for coastal defense encouraged Anderson to move federal troops from Fort Moultrie to the more easily defended Fort Sumter in Charleston harbor, South Carolina. Likewise, Slemmer moved U.S. troops from Fort Barrancas to the more easily defended Fort Pickens in Florida. These troop movements were defensive from the Northern point of view, and acts of aggression from the Southern point of view. Also, an attempt to resupply Fort Sumter via the ship Star of the West was seen as an attack on a Southern owned fort by secessionists, and as an attempt to defend U.S. property from the Northern point of view.
The tariff issue is greatly exaggerated by Lost Cause historians. The tariff had been written and approved by the South, so it was mostly Northerners (especially in Pennsylvania) who complained about the low rates; some Southerners feared that eventually the North would have enough control that it could raise the tariff at will.
As for states' rights, while a states' right of revolution mentioned in the Declaration of Independence was based on the inalienable equal rights of man, secessionists believed in a modified version of states' rights that was safe for slavery.
These issues were especially important in the lower South, where 47 percent of the population were slaves. The upper South, where 32 percent of the population were slaves, considered the Fort Sumter crisis, and especially Lincoln's call for troops to march south to recapture it, a cause for secession. The northernmost border slave states, where 13 percent of the population were slaves, did not secede.
Abraham Lincoln's rejection of the Crittenden Compromise, the failure to secure the ratification of the Corwin amendment in 1861, and the inability of the Washington Peace Conference of 1861 to provide an effective alternative to Crittenden and Corwin came together to prevent a compromise that is still debated by Civil War historians. Even as the war was going on, William Seward and James Buchanan were outlining a debate over the question of inevitability that would continue among historians.
Two competing explanations of the sectional tensions inflaming the nation emerged even before the war. Buchanan believed the sectional hostility to be the accidental, unnecessary work of self-interested or fanatical agitators. He also singled out the "fanaticism" of the Republican Party. Seward, on the other hand, believed there to be an irrepressible conflict between opposing and enduring forces.
The irrepressible conflict argument was the first to dominate historical discussion. In the first decades after the fighting, histories of the Civil War generally reflected the views of Northerners who had participated in the conflict. The war appeared to be a stark moral conflict in which the South was to blame, a conflict that arose as a result of the designs of slave power. Henry Wilson's History of The Rise and Fall of the Slave Power in America (1872-1877) is the foremost representative of this moral interpretation, which argued that Northerners had fought to preserve the union against the aggressive designs of "slave power." Later, in his seven-volume History of the United States from the Compromise of 1850 to the Civil War, (1893-1900), James Ford Rhodes identified slavery as the central—and virtually only—cause of the Civil War. The North and South had reached positions on the issue of slavery that were both irreconcilable and unalterable. The conflict had become inevitable.
But the idea that the war was avoidable did not gain ground among historians until the 1920s, when the "revisionists" began to offer new accounts of the prologue to the conflict. Revisionist historians, such as James G. Randall and Avery Craven, saw in the social and economic systems of the South no differences so fundamental as to require a war. Randall blamed the ineptitude of a "blundering generation" of leaders. He also saw slavery as essentially a benign institution, crumbling in the presence of 19th century tendencies. Craven, the other leading revisionist, placed more emphasis on the issue of slavery than Randall but argued roughly the same points. In The Coming of the Civil War (1942), Craven argued that slave laborers were not much worse off than Northern workers, that the institution was already on the road to ultimate extinction, and that the war could have been averted by skillful and responsible leaders in the tradition of Congressional statesmen Henry Clay and Daniel Webster. Two of the most important figures in U.S. politics in the first half of the 19th century, Clay and Webster, arguably in contrast to the 1850s generation of leaders, shared a predisposition to compromises marked by a passionate patriotic devotion to the Union.
But it is possible that the politicians of the 1850s were not inept. More recent studies have kept elements of the revisionist interpretation alive, emphasizing the role of political agitation (the efforts of Democratic politicians of the South and Republican politicians in the North to keep the sectional conflict at the center of the political debate). David Herbert Donald argued in 1960 that the politicians of the 1850s were not unusually inept but that they were operating in a society in which traditional restraints were being eroded in the face of the rapid extension of democracy. The stability of the two-party system kept the union together, but would collapse in the 1850s, thus reinforcing, rather than suppressing, sectional conflict.
Reinforcing this interpretation, political sociologists have pointed out that the stable functioning of a political democracy requires a setting in which parties represent broad coalitions of varying interests, and that peaceful resolution of social conflicts takes place most easily when the major parties share fundamental values. Before the 1850s, the second American two party system (competition between the Democrats and the Whigs) conformed to this pattern, largely because sectional ideologies and issues were kept out of politics to maintain cross-regional networks of political alliances. However, in the 1840s and 1850s, ideology made its way into the heart of the political system despite the best efforts of the conservative Whig Party and the Democratic Party to keep it out.
Confederate Vice President Alexander Stephens made the point bluntly in his 1861 "Cornerstone Speech": "(Jefferson's) ideas, however, were fundamentally wrong. They rested upon the assumption of the equality of races. This was an error.... Our new government is founded upon exactly the opposite idea; its foundations are laid, its cornerstone rests, upon the great truth that the negro is not equal to the white man; that slavery– subordination to the superior race– is his natural and normal condition."
In July 1863, as decisive campaigns were fought at Gettysburg and Vicksburg, Republican senator Charles Sumner re-dedicated his speech The Barbarism of Slavery and said that desire to preserve slavery was the sole cause of the war:
"[T]here are two apparent rudiments to this war. One is Slavery and the other is State Rights. But the latter is only a cover for the former. If Slavery were out of the way there would be no trouble from State Rights.
The war, then, is for Slavery, and nothing else. It is an insane attempt to vindicate by arms the lordship which had been already asserted in debate. With mad-cap audacity it seeks to install this Barbarism as the truest Civilization. Slavery is declared to be the 'corner-stone' of the new edifice."
Lincoln's war goals were reactions to the war, as opposed to causes. Abraham Lincoln explained the nationalist goal as the preservation of the Union on August 22, 1862, one month before his preliminary Emancipation Proclamation:
"I would save the Union. I would save it the shortest way under the Constitution. The sooner the national authority can be restored; the nearer the Union will be 'the Union as it was.' ... My paramount object in this struggle is to save the Union, and is not either to save or to destroy slavery. If I could save the Union without freeing any slave I would do it, and if I could save it by freeing all the slaves I would do it; and if I could save it by freeing some and leaving others alone I would also do that.... I have here stated my purpose according to my view of official duty; and I intend no modification of my oft-expressed personal wish that all men everywhere could be free."
On March 4, 1865, Lincoln said in his Second Inaugural Address that slavery was the cause of the War:
"One-eighth of the whole population were colored slaves, not distributed generally over the Union, but localized in the southern part of it. These slaves constituted a peculiar and powerful interest. All knew that this interest was somehow the cause of the war. To strengthen, perpetuate, and extend this interest was the object for which the insurgents would rend the Union even by war, while the Government claimed no right to do more than to restrict the territorial enlargement of it."
Most historians... now see no compelling reason why the divergent economies of the North and South should have led to disunion and civil war; rather, they find stronger practical reasons why the sections, whose economies neatly complemented one another, should have found it advantageous to remain united. Beard oversimplified the controversies relating to federal economic policy, for neither section unanimously supported or opposed measures such as the protective tariff, appropriations for internal improvements, or the creation of a national banking system.... During the 1850s, Federal economic policy gave no substantial cause for southern disaffection, for policy was largely determined by pro-Southern Congresses and administrations. Finally, the characteristic posture of the conservative northeastern business community was far from anti-Southern. Most merchants, bankers, and manufacturers were outspoken in their hostility to antislavery agitation and eager for sectional compromise in order to maintain their profitable business connections with the South. The conclusion seems inescapable that if economic differences, real though they were, had been all that troubled relations between North and South, there would be no substantial basis for the idea of an irrepressible conflict.
Despite the power and rigor of categorical logic, it lacks flexibility. Its methods really only apply to syllogisms in which each of the two premises can be translated into a standard-form categorical claim. For this reason, even an introductory treatment of logic calls for some discussion of modern symbolic logic. Truth-functional or propositional logic is the simplest part of symbolic logic, though you will find it both rigorous enough to let you carry out systematic proofs and broad enough to handle a wide range of ordinary arguments.
This chapter shows how to work with complex arrangements of individual sentences. You will use letters to represent sentences, and a few special symbols to represent the standard relations among sentences: roughly speaking, the relations that correspond to the English words "not," "and," "or," and "if-then." Truth tables and rules of proof show how the truth values of the individual claims determine the truth values of their compounds, and whether or not a given conclusion follows from a given set of premises.
1. Truth-functional logic is a precise and useful method for testing the validity of arguments.
Also called propositional or sentential logic, truth-functional logic is the logic of sentences.
It has applications as wide-ranging as set theory and the fundamental principles of computer science, as well as being useful for the examination of ordinary arguments.
Finally, the precision of truth-functional logic makes it a good introduction to nonmathematical symbolic systems.
2. The vocabulary of truth-functional logic consists of claim variables and truth-functional symbols.
Claim variables are capital letters that stand for claims.
In categorical logic, we sometimes used capital letters to represent terms (nouns and noun phrases). Keep those distinct from the same capital letters that now represent whole sentences.
Each claim variable stands for a complete sentence.
Every claim variable has a truth value.
We use T and F to represent the two possible truth values.
When the truth value of a claim is not known, we use a truth table to indicate all possibilities.
Thus, for a single variable P, we write:
P
T
F
Whatever truth value a claim has, its negation (contradictory claim) has the opposite value.
Using ~P to mean the negation of P, we produce the following truth table:
P ~P
T F
F T
This truth table is a definition of negation.
~P is read "not-P." This is our first truth-functional symbol.
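As a quick illustration (our own sketch, not part of the text's apparatus), the same table can be produced in Python; the helper name NOT is just a hypothetical stand-in for the tilde.
# Sketch: negation as a truth function.
def NOT(p):
    return not p

for P in (True, False):
    print(P, NOT(P))   # prints the two rows: True False, then False True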
The remaining truth-functional symbols cover relations between two claims.
Each symbol corresponds, more or less, to an ordinary English word; but you will find the symbols clearer and more rigid than their ordinary-language counterparts.
Accordingly, each symbol receives a precise definition with a truth table, and never deviates from that definition.
A conjunction (indicated by "&") is a compound claim asserting both of the simpler claims contained in it.
More precisely, a conjunction is true if and only if both of the simpler claims are true. We write:
P Q P & Q
T T T
T F F
F T F
F F F
Notice that this truth table needs four lines, not two, to capture all the possible truth values of P and Q.
We often say there's only one way for a conjunction to come out true, but many ways for it to come out false.
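To see the four-row pattern concretely, here is a small Python sketch (an illustration we add, not the book's own apparatus) that prints the conjunction table in the same row order as above.
# Sketch: the truth table for P & Q, true only on the first (T T) row.
for P in (True, False):
    for Q in (True, False):
        print(P, Q, P and Q)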
As the ampersand (&) should indicate, "and" is the most common way to describe a conjunction in English. But:
Other English words translate just as well into &: "but," "although," "while," "even though."
"And" sometimes has connotations that & lacks. "I had dinner and went to bed" suggests that one thing happened before the other; the logical conjunction carries no such suggestion.
A disjunction (indicated by "v") is a compound claim asserting either or both of the simpler claims contained in it.
In rigorous language, we say that a disjunction is false if and only if both of the simpler claims are; and we write:
P Q P v Q
T T T
T F T
F T T
F F F
Aside from the different arrangement of truth values in the final column, this table is set up like the last one.
It's as hard to make a disjunction false as it is to make a conjunction true.
"Or" captures the core meaning of the wedge, "v," but:
"Or" sometimes means that both of the simpler claims can't be true—for example, "You may take the lottery prize in a lump sum or receive payments over twenty years." The logical disjunction never forces us to choose between the disjuncts.
Other English words, like "unless," also get translated into disjunctions.
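A similar sketch (again, only our illustration) contrasts the inclusive wedge with the exclusive reading that English "or" sometimes carries, as in the lottery example; the "exclusive" column is computed with Python's inequality test purely for comparison.
# Sketch: inclusive disjunction (the wedge) versus an exclusive "or".
for P in (True, False):
    for Q in (True, False):
        inclusive = P or Q     # false only when both disjuncts are false
        exclusive = P != Q     # "one or the other, but not both"
        print(P, Q, inclusive, exclusive)
The two columns differ only on the T T row, which is exactly the case the lottery sentence rules out.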
A conditional claim (indicated by "→") is a compound claim asserting the second simpler claim on the condition that the first simpler claim is true.
To define a conditional more exactly, we first need to define its parts.
The claim before the arrow is the antecedent, the one after it the consequent.
A conditional claim is false if and only if its antecedent is true and its consequent is false. Or:
P Q P → Q
T T T
T F F
F T T
F F T
We read "P → Q" as "If P, then Q." In many cases the logical conditional will strike you as different from the ordinary English "if-then" construction.
The essence of our definition is that the conditional must be false under only one set of circumstances: when the antecedent sets up a promised condition and the consequent does not deliver on it.
In other cases we are not pressed to call the compound claim false. See "Commonly Asked Questions" below for further discussion.
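One way to internalize the table is to remember that P → Q fails only on the T-F row. The short sketch below reproduces the same four rows; IMPLIES is a hypothetical helper name, and rewriting the arrow as "not P or Q" is a standard equivalence we use for convenience, not the book's definition.
# Sketch: the material conditional, false only when the antecedent is true
# and the consequent is false.
def IMPLIES(p, q):
    return (not p) or q

for P in (True, False):
    for Q in (True, False):
        print(P, Q, IMPLIES(P, Q))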
3. Three rules permit you to construct a truth table for any well-formed combination of claim variables and symbols. Use parentheses when you need them; put enough rows in the table to capture all combinations of truth values; make columns of the parts of the expression.
Parentheses specify where a truth-functional operation is doing its work.
5 + 3 × 2 makes no sense in arithmetic. It must be written either (5 + 3) × 2, in which case it equals 16, or 5 + (3 × 2), in which case it equals 11.
Similarly, the symbols that link claim variables make no sense when strung together without separation, as in P & Q v R → S. Write (P & Q) v (R → S) or whatever you mean.
The truth table must capture all possible combinations of truth values for the individual sentences contained in the complex expression.
Remember that the truth table is designed to show all the conditions under which a given expression is true or false. Because each of its component claim variables is independent of the others, the table must reflect every combination.
Make a column at the left of the table for each of the claim variables. These are the reference columns.
If you have n claim variables in an expression, you will need 2^n rows: 2 variables require 4 rows, 3 require 8 rows, 4 require 16 rows, and so on.
The rightmost column alternates Ts and Fs; the column just to its left goes T-T-F-F; the column to the left of that, T-T-T-T-F-F-F-F, and so on. The left-hand column is half all Ts and then half all Fs.
Here are the reference columns for a truth table built to handle three variables, P, Q, and R:
P Q R
T T T
T T F
T F T
T F F
F T T
F T F
F F T
F F F
The truth table must contain columns for the parts of the final complex expression, if any of those parts is not a single claim variable.
For example, if you are building a truth table for the expression (P v Q) → R, you should first make a separate column for P v Q and determine its truth values.
You will then refer to those columns in calculating the truth values for the final expression.
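For readers who like to mechanize these three rules, the following Python sketch (our own illustration; itertools.product, from the standard library, simply enumerates the 2^n combinations in the same order as the reference columns above) builds the table for (P v Q) → R, including the intermediate column for P v Q.
from itertools import product

# Sketch: reference columns for P, Q, R, a column for the part P v Q,
# and the final column for (P v Q) -> R.
for P, Q, R in product((True, False), repeat=3):
    p_or_q = P or Q
    final = (not p_or_q) or R   # the conditional (P v Q) -> R
    print(P, Q, R, p_or_q, final)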
4. Truth tables so constructed produce a truth-functional analysis of more complex claims, and show whether two claims are equivalent.
As an example, take the sentence "Either Peleg and Queequeg will both go harpooning, or Queequeg won't."
We first render it, with obvious symbols, (P & Q) v ~Q.
The truth table contains two claim variables and thus needs four rows; and it must have columns for P & Q and ~Q:
P Q P & Q ~Q (P & Q) v ~Q
T T T F T
T F F T T
F T F F F
F F F T T
Now we can say that the complex expression is false only in row 3, that is, only when P is false and Q is true.
When two expressions containing the same claim variables have identical columns in truth tables, we call them truth-functionally equivalent (see definition (1)).
You may think of equivalent expressions as claims that mean the same thing.
Consider the truth table for "If Queequeg goes harpooning, so will Peleg," symbolized Q → P:
P Q Q → P
T T T
T F T
F T F
F F T
This final column is identical to the final column for "Either Peleg and Queequeg will both go harpooning, or Queequeg won't." The two claims are truth-functionally equivalent.
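The equivalence can also be confirmed mechanically. In this sketch (expr1 and expr2 are hypothetical names for the two claims, and the arrow is again rewritten as "not Q or P"), the claims count as truth-functionally equivalent when their columns agree on every row.
from itertools import product

# Sketch: (P & Q) v ~Q and Q -> P have identical final columns.
def expr1(P, Q):
    return (P and Q) or (not Q)

def expr2(P, Q):
    return (not Q) or P          # Q -> P, rewritten as ~Q v P

equivalent = all(expr1(P, Q) == expr2(P, Q)
                 for P, Q in product((True, False), repeat=2))
print(equivalent)                # True: the two claims are equivalent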
5. The first significant work in analyzing and operating on claims with truth-functional logic is the work of translating them into symbolic form.
Ultimately there is no substitute for a careful examination of what the claims are saying.
Translating a compound claim into symbolic form means making its internal logical relations clear and precise.
Because ordinary language often gives us compounds with implied or submerged logical relations, we have to begin by making sure we know what they mean.
Especially with claims involving conditionals, a few rules speed up the process.
When "if" appears by itself, what follows is the antecedent of the conditional.
When "only if" appears as a phrase, what follows is the consequent of the conditional.
The placement of clauses in a sentence is not a reliable guide to their placement in a conditional. (Logical form often departs from grammatical form.)
Thus, "My car will run if you put gas in it" becomes (with "G" and "C" as our symbols) G →C.
"My car will run only if you put gas in it," on the other hand, translates as C → G.
"Provided," or "provided that," often introduces the antecedent of a conditional. "The car will run provided you put gas in it": G → C.
The expression "if and only if" goes peacefully into its logical form if we expand the claim it appears in, into a longer claim.
First, observe that "My car will run if and only if you put gas in it" may be rewritten, "My car will run if you put gas in it, and my car will run only if you put gas in it."
We have already symbolized those two compounds as G → C and C → G, respectively. It is child's play to link the parts with an ampersand and get (G → C) & (C → G).
Other sorts of conditional claims need to be inferred from the statement of necessary and sufficient conditions.
"Literacy is a necessary condition for college graduation" means that you must be literate to be graduated from college, though plenty of other things must be true as well.
We express this relationship by saying: If you are graduated from college, you are literate (G → L). Necessary conditions become the consequents of conditionals.
"Erudition is a sufficient condition for college graduation" means that if you have become erudite, you are guaranteed your graduation. That condition suffices.
We can say: If you are erudite, you will be graduated from college (E → G).
The word "unless," for all its subtleties, translates as "v."
In other complex English claims, the location of words like "either" and "if" shows how to group logical relations, and hence where to place parentheses.
"Either I will dance and sing or I will juggle" goes into logic as (D & S) v J, because the "either" and "or" tell you to put parentheses there.
By comparison, "I will dance and either sing or juggle" becomes D & (S v J). Not the same thing at all.
Along similar lines, "if" and "then" disambiguate claims that might otherwise be mistaken for one another. In "If I sing or yodel, then I'll get booed," we know to enclose the disjunction within parentheses: (S v Y) → B.
Compare: "I'll sing, or else, if I yodel, I'll get booed." This is written S v (Y → B).
6. Truth tables offer one method for testing an argument for validity. (We will also look at two others.)
This method builds from a single principle, the definition of validity.
Recall that for an argument to be valid, it must have a true conclusion whenever all its premises are true.
So, we enter all the premises of the argument, and its conclusion, in a truth table, and examine the rows in which all premises are true.
If the conclusion is true in all such rows, the argument is valid. If even one row exists in which all premises are true and the conclusion is false, the argument is invalid.
Sometimes all the premises of an argument cannot be true at once. (They contain a contradiction.) In that case the argument is still valid, for we have no rows in which all premises are true and the conclusion false.
When using this method, number the columns of your truth table and keep three sorts of columns distinct from one another.
First, on the left, are the reference columns, headed by single-letter claim variables. You use those to calculate all truth values.
Scattered among the other columns, you are likely to have columns for the parts of complex expressions. Do not confuse these with the premises and conclusions.
The columns for premises and conclusions are the only ones that matter.
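As a rough Python sketch of this test (added here for illustration, with made-up function names), an argument is declared invalid as soon as one row makes every premise true and the conclusion false:

from itertools import product

def is_valid(premises, conclusion, n_vars):
    # premises: list of functions of a truth-value tuple; conclusion: one such function
    for row in product([True, False], repeat=n_vars):
        if all(p(row) for p in premises) and not conclusion(row):
            return False          # a counterexample row exists
    return True

# Example: modus ponens over (P, Q) is valid.
print(is_valid([lambda v: (not v[0]) or v[1],   # P -> Q
                lambda v: v[0]],                # P
               lambda v: v[1],                  # conclusion Q
               2))                              # prints True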
7. A second method is known as the short truth-table method.
The short truth-table method is practically necessary.
Complete truth tables are tedious to fill out.
Moreover, the number of calculations of truth values means greater opportunities for small errors that can lead to a wrong answer.
The short truth-table method is a kind of indirect proof.
Rather than go directly to demonstrate an argument's validity, you work to see if it can possibly be invalid. If not, of course, it is valid.
An argument is invalid in case any circumstance exists in which all its premises are true and its conclusion false.
So the short method consists in trying to make the conclusion come out false and the premises true.
It is usually quickest to begin with the argument's conclusion, assigning the claim variables values that make that conclusion false and seeing what the values of the other variables must be.
Here is an argument:
~A v B
C → A ~B & D
For the conclusion to be false, C must be true. We begin:
A B C D
- - T -
We want all the premises to be true. If C is true, then the second premise can be true only if A is true; thus:
A B C D
T - T -
When A is true, naturally ~A is false. So the first premise can be true only on the assumption that B is true:
A B C D
T T T -
What about the third premise, ~B & D? We want it to be true. But no matter what truth value we give to D, ~B is false and makes the third premise false.
We have failed to produce a set of circumstances that make the premises all true and the conclusion false; the argument is valid.
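The same brute-force idea, sketched in Python (an illustration added here; it assumes the conclusion ~C as reconstructed above), confirms that no assignment makes all three premises true while C is true:

from itertools import product

counterexample = False
for A, B, C, D in product([True, False], repeat=4):
    premises = ((not A) or B) and ((not C) or A) and ((not B) and D)
    if premises and C:            # conclusion ~C would be false here
        counterexample = True
print(counterexample)             # False: there is no counterexample, so the argument is valid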
Some examples make it hard to use this method.
There might be too many ways to make premises come out all true and the conclusion false. Sometimes we simply have to consider several possibilities.
At other times, it is easier to begin with one premise. Assume that the premise is true, and carry out the argument to make all the other premises true as well, and the conclusion false.
8. The method of deduction is the third and most sophisticated way of demonstrating an argument's validity.
The method has disadvantages.
It can be cumbersome as a test of invalidity, because failing to arrive at a conclusion from a set of premises can mean either that the argument is invalid, or merely that we have not found a good proof for a valid argument.
Deduction also presupposes familiarity with a set of rules that guide you through the proof; these must be learned until they feel automatic, and learning the rules takes some time and energy.
However, deduction possesses the great advantage of exposing the logical relations at work in an argument.
Doing such a proof or derivation resembles thinking through an argument.
When using this process you learn not only that a conclusion is true when all the premises are, but also why.
Because deduction brings out the actual logical connections in an argument, it also makes excellent training in critical thinking.
A truth table is a (very slow) computer program that delivers an answer for any argument you put in: a machine.
Deduction, on the other hand, works like a tool and requires craft. So it leaves you more skilled than the truth tables can.
In every deduction, certain basic principles apply.
You begin with the set of premises and apply rules from Group I and Group II (see below) to them.
If applying a rule produces the conclusion, the deduction is complete. If not, the result becomes another line in the proof, which you can use as you go as if it were a premise.
When you produce a new line for the deduction, write (to the right of it) the lines you used in producing it, and the abbreviation for the rule you used. This is called the annotation for the deduction.
9. Elementary valid argument patterns constitute the first set of rules you must learn before carrying out a deduction (Group I rules). These apply only to whole lines of a deduction, not to the individual parts of lines.
Modus ponens (MP; rule 1) says that, given a conditional claim in one line of a deduction, and the antecedent to that conditional in another line, you can deduce the consequent:
(A & B) →C
~C → (A v ~B)
A & B
A v ~B
Note, in the above examples, that either antecedent or consequent may be more complex than a single letter.
Modus tollens (MT; rule 2) functions similarly. Given a conditional claim and the denial to its consequent, you can deduce the denial of the antecedent:
A → B
~B
~A
The chain argument (CA; rule 3), one of the easiest to remember, applies when the consequent of one conditional is the antecedent of another:
A → B
B → C
A → C
Disjunctive arguments (DA; rule 4) let you infer one of two disjuncts in a disjunction, when you are given the negation of the other disjunct:
A v B
~A
B
The motivating idea is simple: Given two alternatives, and the denial of one of them, you take the other alternative.
Remember that an unnegated expression is itself the negation of the same expression with "~" in front of it. So, given ~(A & B) v C, and (A & B), you can conclude that C.
Because a conjunction asserts that both of its conjuncts are true, you can begin with any conjunction and derive either conjunct: This is simplification (SIM; rule 5).
A & B
A (or B)
Conjunction (CONJ; rule 6), as its name implies, takes any two separate lines of a deduction and joins them:
A & B
It is worth restating the obvious: The parts of this conjunction may be as complex as you like:
A → B
C v D
(A → B) & (C v D)
Superficially similar to these rules about conjunction is the rule of addition (ADD; rule 7). Given any line in a deduction, you may create a disjunction that contains that line as one of its elements, and anything at all as the other one:
A
A v B
A v (B & C)
A v (C → ~B)
(Each of the three disjunctions above is separately derivable from the single line A.)
The constructive dilemma (CD; rule 8) begins with two conditional claims and the disjunction of their antecedents, and moves to the disjunction of their consequents:
A → B
C → D
A v C
B v D
If at least one of the antecedents is given, at least one of the consequents can be inferred. That's all the rule says.
This rule of course relies on modus ponens (rule 1).
In a destructive dilemma (DD; rule 9) we have two conditionals again, but the disjunction of the negations of their consequents; we derive the disjunction of the negations of the antecedents:
A → B
C → D
~B v ~D
~A v ~C
As the constructive dilemma relied on modus ponens, the destructive dilemma relies on modus tollens (rule 2).
If at least one of the consequents is being denied, then at least one of the antecedents is being denied as well.
10. Truth-functional equivalences form the Group II set of rules (see definition (2)).
These rules work somewhat differently from the argument patterns that make up Group I rules.
Truth-functional equivalence means that two claims say exactly the same thing. We can therefore replace them with one another without changing the meaning of a claim.
Unlike Group I rules, which are rules of inference, rules of equivalence work equally well in both directions.
Also unlike those rules, which can only be applied to complete lines of a deduction, these let us replace any part of a line (part of a claim) with its equivalent.
Remember that you will still make annotations: Indicate the line you went to, and what you did to it.
Double negation (DN; rule 10) lets you remove two consecutive negation signs, or insert two such signs, anywhere:
A ↔ ~~A
Commutation (COM; rule 11) applies to conjunctions and disjunctions. The order of their elements does not matter:
(A & B) ↔ (B & A)
(A v B) ↔ (B v A)
This should remind you of the commutative laws of addition and multiplication in arithmetic (7 + 5 = 5 + 7).
Just as those laws do not apply to subtraction or division, so commutation here does not apply to conditionals.
According to the rule called implication (IMPL; rule 12), conditionals can be turned into disjunctions, and vice versa:
(A → B) ↔ (~A v B)
People sometimes find this one hard to remember, or even hard to believe. You may want to construct truth tables for A →B and ~A v B and see their equivalence.
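A quick mechanical check (a Python illustration added here, not from the original text) does the same job as those truth tables:

from itertools import product

print(all((B if A else True) == ((not A) or B)            # A -> B  vs  ~A v B
          for A, B in product([True, False], repeat=2)))  # prints True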
Again, bear in mind that these equivalence rules work for more complicated expressions and for parts of claims:
A & (~B → C) ↔ A & (B v C)
(A & ~B) R C ↔~(A & ~B) v C
Contraposition (CONTR; rule 13) switches the antecedent and consequent of a conditional with the negations of one another:
(A → B) ↔ (~B → ~A)
(~(A & B) → (C v ~D)) ↔ (~(C v ~D) → (A & B))
DeMorgan's Laws (DEM; rule 14) govern the negations of conjunctions and disjunctions:
~(A & B) ↔ (~A v ~B)
~(A v B) ↔ (~A & ~B)
Always remember to change the & to a v, or vice versa, when moving the ~ inside the parentheses.
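The same style of check (again a Python illustration, not part of the original text) verifies both DeMorgan equivalences:

from itertools import product

pairs = list(product([True, False], repeat=2))
print(all((not (A and B)) == ((not A) or (not B)) for A, B in pairs))   # True
print(all((not (A or B)) == ((not A) and (not B)) for A, B in pairs))   # True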
One rule you might find hard to memorize, but that is worth keeping in mind, is exportation (EXP; rule 15):
(A → (B → C)) ↔ ((A & B) → C)
This is not as strange as it looks. When you say, "If I get invited to the party, then if it's on Friday night, I can go," you're naming two conditions that both must be met before you can go. You may as well say, "If I get invited to the party and it's on Friday night, I can go."
Note: This rule does not apply to a complex conditional of the form (A → B) → C.
Association (ASSOC; rule 16) tells you that strings of conjuncts or disjuncts may be grouped in any way:
(A & (B & C)) ↔ ((A & B) & C)
(A v (B v C)) ↔ ((A v B) v C)
Like commutation, this rule should remind you of the comparable rules of association in arithmetic.
All the signs must be the same for association to apply: You must have a string of letters all joined by & or v. (And it does not work for the conditional.)
The two versions of distribution (DIST; rule 17) let us handle combinations of conjunction and disjunction, as follows:
(A & (B v C)) ↔ ((A & B) v (A & C))
(A v (B & C)) ↔ ((A v B) & (A v C))
Note the symmetry of the two rules. Once you have learned one, you can turn it into the other by replacing every & with a v, every v with an &.
Also note that the thing outside the parentheses, which has a sign next to it, keeps that sign next to itself.
Finally, trivially, and obviously, we have the rules of tautology (TAUT; rule 18). No comment is necessary:
A ↔ (A v A)
A ↔ (A & A)
11. Along with these rules of deduction, the method of conditional proof (CP) offers a strategy for showing the truth of conditional claims.
The idea behind the strategy is this: If a set of premises supports the moves from A to B, then those premises show the truth of A → B.
The essence of the strategy is that A is not a premise given in the argument.
We add A as a hypothetical assumption.
Once we derive B (the consequent of the desired conditional) within the deduction, we may conclude that B follows from A, that is, that A → B.
The complications of conditional proof follow from the fact that this additional premise is not really given. It must be used to derive the needed conditional claim and then eliminated, or discharged.
Though the method may strike you as needlessly complex, it actually turns very hard arguments into manageable ones.
The method of conditional proof consists of a few new steps:
We begin by writing down the assumed premise, the antecedent of the desired conditional.
We circle the number of that step.
The annotation reads, "CP Premise."
We continue through the deduction until we reach the consequent of the desired conditional.
In the next line, we state the conditional that unites the CP premise with the consequent.
We draw a line to the left, connecting the CP premise with the consequent.
The annotation lists all the steps bracketed by that line (e.g., "2–5"), and gives CP as the rule.
In addition to the steps just listed, be sure to follow certain rules for conditional proofs:
They only prove the truth of a conditional claim.
If you use more than one CP premise to reach a single final conditional, discharge the premises in reverse order from their assumption. (Lines on the left don't cross.)
Once a premise has been discharged, none of the steps bracketed by the line may appear again in the deduction.
Discharge all CP premises.
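As a short illustration (an example added here, not taken from the original), here is a conditional proof of A → C from the premises A → B and B → C; step 3 is the CP premise whose number would be circled:
1. A → B        Premise
2. B → C        Premise
3. A            CP Premise
4. B            1, 3, MP
5. C            2, 4, MP
6. A → C        3–5, CP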
12. Truth-functional logic as defined in this chapter is a formal system with two properties of great interest to philosophers and logicians.
Truth-functional logic is sound.
The soundness of a system means that all proofs following the rules of that system will be valid arguments.
If you apply this chapter's rules properly in your deductions, they will all produce valid arguments; from true premises you are guaranteed to reach true conclusions.
Even more remarkably, truth-functional logic is complete.
The completeness of a system means that every valid inference in the system can be produced within that system.
If conclusion C follows from a set of premises P1, P2, etc., then it can be shown to follow. | http://highered.mcgraw-hill.com/sites/007312625x/student_view0/chapter9/ | 13 |
43 | Discussing important and controversial issues of war and peace
“A good teacher knows that the best way to help students learn is to allow
them to find the truth by themselves.”
The teacher’s first task in approaching any controversial subject
is to help students develop a solid knowledge base from which to form their own opinions.
"Peace is a state in which conflicts occur frequently
and are resolved constructively (war, in contrast, is a state in which
conflicts are managed through the use of large-scale violence). Conflicts
should occur frequently, because when they are managed constructively
they have many positive outcomes, such as increasing the motivation
and energy to solve problems, increasing achievement and productivity,
clarifying one's identity and values, and increasing one's understanding
of other perspectives."
David Johnson and Roger Johnson
Why and How to Plan Classroom Debates:
• Allowing students
to research both sides of an issue and then debate and debrief awakens
their critical thinking in a powerful way. Further, debate formats provide
a safe environment in which to discuss controversial issues.
• Allowing students to switch and re-debate on the opposing side strengthens their
command of the material and their ability to take multiple perspectives.
(See research by Avery, Johnson, Johnson, and Mitchell in How Children
Understand War and Peace and description by Lickona in Educating
for Character, and see suggestions below.)
• In addition,
devoting time for students to propose “solutions” to a controversial
problem gives them practice in working with those who hold opposing viewpoints
to face tough issues.
• Debates need not be time consuming, but are better when issues
are fully researched. (Minimum: one class or one night of research prior
to class debate; one period for debate.)
• Asking students
to interview family members engages families and students. Family interviews
help families feel that the instruction is fair and helps students feel
safe about joining in the class discussion.
• Having students
watch political debates is a way of encouraging their interest in civil
dialogue as well as helping them think critically about candidates and
the positions they take on important issues.
• As the teacher, share your opinion thoughtfully if you wish and
if you are asked, while maintaining a safe, respectful, open dialogue.
The teacher's role as a fair moderator is crucial to the success of debate
as a method of instruction.
Middle School Debate Class:
Students will choose controversial topics in current events,
research both sides, and work with a team to prepare arguments for formal
debates. The focus will be on critical thinking, active listening, and
“Where would we have been if everyone had thought things out in those days?”
Adolph Eichmann at his trial for Nazi war crimes
“I was there to follow orders, not to think.”
Defendant in Watergate trials
(From Educating for Character, Lickona)
Why debate? To learn how to think!
A. Debate, the art of reasonable discussion of controversial topics,
will help us all become morally aware and will enhance our ability
to think critically. We will be better able to take the perspective of
others, and make thoughtful moral decisions for ourselves. We may
often disagree—but we will learn to do so respectfully and civilly.
Where to begin?
A. Keep your ears, eyes, and mind open for interesting topics for debate.
B. Talk to family members and friends about issues of importance.
C. Start a file of articles, or make a list of topics that interest you.
D. Introductory classes will get us into debating with mini-debates
on topics we already know a lot about (e.g.: “Cats are better
than dogs as pets for city dwellers.”).
E. We will also view videos of competitions of debate clubs and political
debates. For more information on the rules of debate, visit some of
the websites listed below!
How to debate? Become aware, select, research, and plan your debate!
A. Become aware: Individually and as a group we will use newspapers,
magazines, and websites to select topics of interest.
--- 1. Keep track of current events by reading headlines, scanning
news sources and other sites available through the School Library Website.
---2. Discuss issues of importance with friends and family members.
B. Select: When a group of students has agreed on a topic of interest,
we will determine the central question or topic to debate.
---1. Resolve: The statement that is debated is called the resolve.
It is like a thesis in an essay: a statement containing an opinion
that is debatable.
C. Research: During this phase, you are trying to get as much information
as possible on BOTH SIDES of the topic. You do not yet know which
side your team will debate, so you need to understand the big picture
and as many issues as possible that can be used as arguments for or against.
D. Plan your debate:
---1. Choose sides: Sometimes this is done by choice, often by a coin toss.
---2. Continue research! Use encyclopedias, news sources, personal
interviews, and other tools to accumulate an overwhelming amount of
evidence to support your argument.
---3. Prepare your opening: When you know whether you are debating
the affirmative (pro) or the negative (con), you and your teammates
need to fine tune your arguments and begin to prepare your first speech.
---4. Debate: There is a fixed order to speaking and questioning during
debates. Speak clearly, looking at your audience and opponents. Do
not simply read your prepared remarks. Introduce your teammates, your
argument, make your points, and sum up. As you listen to speakers,
make notes! You will use these notes as you prepare questions and
new points to argue.
A. Resolve: A statement (e.g. “Cats are better than dogs as
pets.”) to be argued in a debate.
B. Definitions: The affirmative side defines words used in the resolve
(e.g. "Pets in this case means animals to be kept in apartments
C. Counter-plan: A new argument that concedes some of the points made
(“Cats may be better than dogs in some situations, but dogs
can be trained as helping animals.”)
D. Conceding a point: Allowing the other side to have a point, but
making a new one “on top” of it. ("Well, cats may
be better in terms of size, but hauling all that kitty litter up to
the apartment is going to be a pain. That's why a small dog would be better.")
E. Rebuttal: The final argument, in which each team tries to sum up
for the judges the reason their team has proved (affirmative) or disproved
(negative) the resolve.
F. Courtesy and respect: These qualities are to be shown at all times
during the debate process.
OPENING DEBATE Exercise:
Use topics with which students are familiar to give students practice in debate.
After debating the merits of cats versus dogs as pets, move into the classroom
and school environment. Then, ask students to generate their own debate topics.
The class is debating these resolves:
Brainstorm arguments for and against each statement:
1. School unity
promoted by school uniforms outweighs
personal freedom in dress.
2. The present
discipline system provides clear guidelines for student behavior.
3. The current
sports schedule meets the needs of students appropriately.
Suggest other topics you would like Debate Class to investigate and debate.
Affirmative: _______________________________ v. Negative: _______________________________
Format of this Debate: 10-minute debate
1. A coin toss will decide the choice of sides, followed by a five-minute
preparation period before the debate begins. Remember, you need to consider
arguments and critiques for both sides of the proposal, even though you
are arguing for only one side.
2. First Affirmative Speech is to be no longer than one minute in length.
(Affirmative defines terms used in resolve.)
3. Questioning by Negative team will last for one minute.
4. First Negative Speech is to be no longer than one minute in length.
5. Questioning by Affirmative team will last for one minute.
6. Three minutes to prepare final argument: Rebuttal Speech
7. Negative Rebuttal Speech 1 minute
8. Affirmative Rebuttal Speech 1 minute (note: in the Rebuttal, the speaker
may not introduce new material that has not already been mentioned in the debate.)
9. Critique of Debate and Decision of Judges
• Your speeches should have logical organization and flow smoothly.
• Your team should show respect and courtesy at all times.
• Begin each speech by introducing yourself and your teammates.
• Take notes during all speeches and Q&A sessions to prepare
your questions and speeches.
• Support your arguments with convincing evidence or detail.
• Aim to cast doubt on the opposing argument: point out the flaws
and inconsistencies in the opponents’ arguments. Draw parallels
to other situations. Offer counter-plans.
• A successful debater must consider both sides of an argument!
Concede one point to better dispute another.
DEBATE CHART (for notes and questions):
Suggestions for Other Activities
Researchers Patricia G. Avery, David W. Johnson, Roger T. Johnson, and
James M. Mitchell of the University of Minnesota recommend taking structured
debates a step further. In How Children Understand War and Peace,
they articulate numerous methods for staging academic controversies (structured
debates) in a safe and well-managed classroom. They urge teachers to allow
students to switch positions during the debate process, and to argue the
opposite point of view. Further, they suggest that teachers ask student
debaters to drop the pretense of debate and end the process by collaborating
on a group set of proposals to remedy the situation debated. (See Using
Structured Controversies link for a concise set of directions.)
FISHBOWL: As a change, or when time does not permit a formal debate,
invite two students to begin a discussion on a topic of their choosing.
The rest of the class listens to the discussion, and one-by-one "taps"
into the discussion if they have something new to contribute. Similar
to asking, "May I have this dance?" one new member joins the
pair while one of the fishbowl team departs. Short or long, this is an
excellent way for students to share their thoughts. Monitor participation
so that one or two dominant personalities do not monopolize the fishbowl.
LETTERS TO THE EDITOR: As a culminating activity, I often
ask debate participants to write a letter to the editor of a newspaper
that might take interest in the issue debated. Participants are well qualified
to lay out both sides of an issue, and I ask them to make concrete and
constructive suggestions to resolve a conflict, whether it be one over
school uniforms or on the invasion of Iraq by American troops.
Courtroom Dramas: In movies and in role plays: Watching
excerpts of excellent acting and arguing often provides a risk-free way
to engage in discussion of an issue. Examples of excellent movies include
To Kill a Mockingbird, Separate but Equal, and Amistad,
with courtroom arguments that are both dramatic and articulate on issues
of race. I have also had children role play courtroom dramas based on
characters from books we were reading. Further, I have had students stage
formal debates on the merits of including a book in our curriculum by
having them take the role of curriculum committee members. They were covered
by the school newspaper, and members of the administration attended the debates.
| http://teachforpeace.org/DEBATE-peace/debatepeaceintro.htm | 13
28 | Introduction to Functions
In the expressions we have created so far, we were using operators, constants, and values we knew already. In some complex expressions, just the known operators and the values in cells will not be enough. An alternative is to use a function.
A function is a small assignment that is performed to produce a result that can be reliably used. There are two types of functions you can use: those you create and those that are already available. In our lessons, we will not create our own functions. We will only use those that are already installed in Microsoft Excel. The already available functions are referred to as built-in functions.
The built-in functions were created by Microsoft and they are available from the time you finish installing Microsoft Excel. You can reliably use them without being concerned with how they were created or how they work.
It is like when you pick up a TV remote control and press a button to change the channel. You don't care how the remote control works and you don't spend any time finding out why the channel changed.
Just as in the real world we use various functions on cars, TVs, food, etc., in the computer world various functions are made available so you can simply use them to do your job. As a spreadsheet application, Microsoft Excel is equipped with various functions that can perform different types of calculations.
In order to use a function more effectively, you should first know whether it is available and what you need to do to make it work. If you were creating a function, you would start its structure as follows:
Function
End Function
The area between Function and End Function is referred to as the body of the function. That's where you would perform the necessary assignment of the function.
A function must have a name. Following our formula, you would specify the name after Function:
Function Name
End Function
As mentioned already, in our lessons, we will use only the existing functions that were installed with Microsoft Excel. To start using a function, you would click the cell where you want to see the result. If you know the name of the function you want to use, after clicking the cell, type = followed by the name of the function. After you type the first character of a function, Microsoft Excel would display an alphabetical list of the functions that start with that character:
You can keep typing the name of the function and as you type, Microsoft Excel would narrow the list of names that match the first characters you had type. Otherwise, if you see the name of the function you want in the list, you can double-click it. The function would be selected and written in the cell.
If you don't know or don't remember the name of the function that would do what you want, Microsoft Excel provides all the necessary tools and functionality to assist you.
To see a list of the available functions, on the Ribbon, click Formulas:
The functions are listed by category. To see the list of functions in a category, click the Financial, the Logical, the Text, the Date & Time, the Lookup & Reference, or the Math & Trig button. When you click, a list would appear. Here is an example:
After clicking one of those buttons, if you see the function you want to use, click it. If the function does not appear, you can click the More Functions button. This buttons holds four other categories of functions. After clicking the button, it displays a menu. You can position the mouse on one to view its list:
On the Ribbon, the AutoSum function holds a list of the most common algebraic functions:
While the buttons show the functions in their respective categories, you can see all of the functions in one list. In fact, another way to look for a function is by using the Insert Function dialog box. To access it, in the Function Library section of the Ribbon, click the Insert Function button:
This would display the Insert Function dialog box:
As described previous for the Ribbon, the functions are organized in categories in the middle combo box of the Insert Function dialog box. To select a category, click the arrow of that combo box and select. The functions of the selected category would appear in the Select A Function list box. One of the options in the combo box is All. If you select it, all functions would appear in the Select A Function list box. After selecting the desired function, you can click OK.
As its name implies, the Recently Used button holds a list of the functions you most previously used.
Instead of using the Ribbon or the Insert Function dialog box to select a function, if you already know the name of the function you want to use, you can directly type it where appropriate. Although the functions in Microsoft Excel are not case-sensitive, it is a good idea to write them in uppercase.
We saw that, if you were creating a function, you would start it as follows:
Function Name
End Function
We mentioned that the section between the Function Name line and the End Function line is referred to as the body of the function. This is where you would describe the purpose of the function. Different functions are meant for different purposes. For example, when you press the power button on a TV remote control, the TV gets turned ON or OFF depending on whether it was already ON or OFF. Therefore, the purpose of the power button (that is, its function) is to turn the TV ON or OFF and vice versa.
To carry out its assignment, a function may need one or more external values. This external value is called an argument. While one function can use one argument, another function may need more than one argument. The person who creates a function decides how many arguments the function would need, based on what he or she wants the function to do.
We saw already that, if you are working manually, after clicking a cell, you can type = followed by the name of the function. The arguments of a function are provided in parentheses. Therefore, after typing = followed by the name of the function, type an opening parenthesis "(". If the function doesn't take any argument, type the closing parenthesis and click the Enter button or press Enter:
If the function is taking one argument, after the opening parenthesis, you can type its value:
If the value is held in a cell, you can click the cell that holds that value:
If the function takes more than one argument, type a comma, followed by the next argument that you can type or select from another cell or a group of cells, depending on the function.
After selecting a function from the Ribbon or from the Insert Function dialog box as we described earlier, a dialog box named Function Arguments would open:
The purpose of this dialog box is to assist you with specifying the arguments of the function you selected. In the top section, this dialog box displays one or more text box in a group box whose label is the name of the function you selected. Each text box is preceded by a label that displays the name of the argument.
If you know the value of the argument you want to use, you can type it. If you know the name or address of the cell or the group of cells that holds the value you want to use, you can type the name of that cell, the range of the cells, or the name of the group of cells, in the appropriate text box. Otherwise, to assist you with the value of an argument, a text box may display a selection button on its right side. If you click that button, the Function Arguments dialog box would be minimized to give you access to the worksheet:
You can then select the necessary cell or the group of cells. After making the selection, click the stop selection button. This would bring back the Function Arguments dialog box in its full display. If the function takes more than one argument, specify the value in each text box.
On a function that takes one argument, the argument may be required. In this case, you must provide it. If you don't, the function will not work (the result would be an error). If a function takes more than one argument, all arguments may be required. In this case, if you fail to provide all of them, the function would not work. In the Function Arguments dialog box, the labels of the required arguments are in bold characters.
In a function, an argument may not be required. In this case, if you don't provide the argument, the function would use its own value, called a default argument. Another function that takes more than one argument may not require all of them. There are even cases when a function takes many arguments but none of them is required. When an argument is not required, you don't have to supply it. If you don't, then the function would use a default value for that particular argument.
If you are manually typing a function, if it takes one argument and the argument is optional, leave the parentheses empty. If the function is taking more than one argument and one or more arguments is (are) optional, after the opening parenthesis or the comma that separates it from the left argument, you can leave the placeholder empty, then continue with the rest of the arguments. Here is an example:
Notice the empty space for the fourth argument.
In the Function Arguments dialog box, the labels of the non-required arguments are in normal characters (not bold):
The person who creates a function also decides on the number of its arguments, whether the argument(s) is/are required and, if the function takes more than one argument, which ones are required, whether all of them are required or none of them is required.
After specifying the arguments, click OK.
We mentioned that you could directly type the name of a function and its arguments or you could click OK after using the Function Arguments dialog box. If everything went alright, you should see the result in the spreadsheet. If something went wrong, an error message would let you know.
The result that displays is called the return value of the function. Of course, since there are various types of functions, different functions produce different types of results. For example, while one function would produce a string, another function can produce a number.
The SUM function is used to get the addition of various numbers or the contents of various cells. The result can be displayed in another cell or used in an expression.
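For example (an illustration added here, with arbitrary cell addresses), typing =SUM(B2:B5) in an empty cell adds the numbers in cells B2 through B5, and an expression such as =SUM(B2:B5)*2 uses that sum inside a larger calculation.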
On the Ribbon, in the Home tab, the Editing section is equipped with a button called AutoSum.
There are two primary ways of using the AutoSum. You can click an empty contiguous cell and then click the AutoSum button. Before performing the SUM function, the computer will ask whether it found the right cells that you want to get the sum of. If the computer found the right cells, you can press Enter; otherwise use your mouse or your keyboard to select the cells you want to consider. You can also select the cells involved in a sum plus an empty cell that will be used to display the result, and then click the AutoSum button.
The decimal numeric system counts from minus infinity (-∞) to infinity (+∞). This means that a number can be usually negative or positive, depending on its position from 0, which is considered as neutral. In some operations, the number considered will need to be only positive even if it is provided in a negative format.
The absolute value of a number x is x if the number is (already) positive. If the number is negative, then its absolute value is its positive equivalent. For example, the absolute value of 12 is 12, while the absolute value of –12 is 12.
To get the absolute value of a number, you can use the ABS() function. Its syntax is:
Function ABS(number) As Number
This function takes one argument. The argument must be a number or an expression convertible to a number:
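For example (again an illustration, with an arbitrary cell address), =ABS(-12) returns 12, and =ABS(C3) returns the absolute value of the number in cell C3.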
| http://functionx.com/excel/functions/introduction.htm | 13
62 | A-level Critical Thinking
CRITICAL THINKING REVISION NOTES
Credibility of evidence
- Argument: A proposal/conclusion supported by a reason or reasons.
- Evidence: Information that supports an argument.
- Credibility: The believability of information.*
Source: Where information comes from e.g. a newspaper or a Website.
- Truth – Something that is correct
- Neutrality – A neutral source is impartial and does not take sides. The neutral source does not favour one point of view over another. Neutral sources are generally seen as more reliable.
- Vested Interests – A person or organisation has a vested interest if they have something to gain from supporting a particular point of view. This can cause a person or organisation to lie, tell the truth, distort evidence or present one-sided evidence. Vested interests can increase or decrease the credibility of a source. Vested interests do not necessarily mean that a source will be biased.
- Bias – Bias is a lack of impartiality. Biased sources favour a particular point of view. It has been argued that an unbiased source is impossible as everyone has a particular viewpoint
1. Propaganda
2. Bias can be seen in the selective use of language
3. Cultural bias – Ethnocentrism
- Expertise – Expertise is specialist knowledge in a particular field. Experts are only regarded as knowledgeable in their own particular field.
·Experts disagree. ·Experts have made incorrect judgements . ·Some have argued expertise is harmful. (e.g. medicine) ·Expertise changes over time.
- Reputation – Reputation is the regard in which a person or organisation is held. People can have good or bad reputations based upon their character, organisations can have reputations because of their actions. Newspapers can also have a reputation for quality and accuracy.
- Observation- Eyewitness accounts are direct evidence. Evidence from those that saw an event firsthand .
Observations are affected by:
·Senses – short-sightedness would affect an eye-witness account. ·Memory – eye-witness accounts can be poor a long time after an event because memory can fade. ·Bias – Prejudice can distort an observation. ·Prior knowledge – Expertise can affect the way that an eye-witness account is told.
- Corroboration – When more that one source of evidence supports the same conclusion. The evidence “points in the same direction”.
- Selectivity – How representative information or evidence is. Surveys can be unrepresentative in terms of size and the type of people that they survey. To be neutral selected information should be representative of all of the information available.
- Context – The setting in which information has been collected (e.g. a war-zone)
·The historic context – Attitudes can change over a period of time. ·The scientific context – The response to new scientific ideas is affected by what is already known (e.g. Darwinism initially discredited). ·The journalistic context – Embedded reporters in a war zone – how accurate can they be? ·Interview context – People respond differently to different interviewers. ·Linguistic context – Language can affect the type of answers people give.
- Credibility criteria: Criteria used to assess how believable a source of information is
1.Neutrality – How impartial a source of information is (biased or not). 2.Vested Interest – When a person or organisation have something to gain from supporting a point of view. 3.Expertise – Where the writer of information has specialist subject knowledge in a particular area. 4.Reputation – The regard in which a person of organisation is held in, based on their track record and their status. 5.Observation – A report from someone who directly perceived (heard, saw, felt) an event – an eyewitness account. 6.Circumstantial evidence - Physical evidence supporting the conclusion. 7.Corroboration – Where more than one source of evidence supports the same conclusion. 8.Selectivity – A measure of how representative information is compared with all of the information available. 9.Context – The situation in which information is collected.
An easy, quick way of remembering the main credibility criteria:
- R eputation
- A bility to perceive
- V ested interest
- E xpertise
- N eutrality / bias
- C orroboration
- C onsistency
Sources and types of evidence
- Primary sources – sources from the time or period of study
- Secondary sources – sources not from the time of study. These can include books with primary sources that have been processed and analysed by historians.
- Primary evidence –new evidence collected as part of research
- Secondary evidence – all other evidence such as government statistics.
- First-hand evidence – eyewitness accounts from those who have directly observed an event.
- Second hand-evidence – hearsay, evidence from those that have heard an account.
- Direct evidence – eyewitness evidence .
- Circumstantial evidence - non-direct evidence from which something can be inferred .
- Statistics – can they be trusted?
- Participant observation – does it affect behaviour?
- Interviews – Does interview style affect the responses gained?
- Questionnaires – Are they representative and will people be honest ?
Making a reasoned judgement
- Corroboration and conflict
- Balance of evidence
- Weight of evidence
- Quality of evidence
- Credibility criteria
- Argument = Reason + Conclusion
- Argument – The presentation of one or more reasons to support a conclusion.
- Reason – A claim that supports a conclusion.
- Conclusion – A claim that is supported by one or more reasons.
- Argument indicator - A word which links a reason and a conclusion.
Arguments are often presented without using reason indicators. Reasons are different to evidence – evidence supports a reason.
The reasons and conclusions are separated by an inference bar. Conclusions can be identified by conclusion indicators (e.g. Therefore).
Some arguments have an intermediate conclusion – a conclusion before the main conclusion is stated.
Types of reasoning
- Simple reasoning – There is a conclusion that is supported by a reason.
- Side-by-side reasoning – Two reasons independent of each other support a conclusion.
- Joint reasoning – Two reasons from which one conclusion can be drawn. It would not be possible to draw a conclusion from one of the reasons on its own.
- Chain reasoning – Reasoning linked together.
- Joint reasons: Reasons which have been used together to support a conclusion.
- Assumption: An Assumption is an unstated reason
- Principle: A principle is a general statement about how something should be. There are moral, legal and ethical principles. Principles are inflexible and cannot be bent to fit particular situations.
- Counter argument: A counter argument is an argument that opposes another argument. Counter arguments can be included in an argument in order to dismiss that argument .
- Counter claims: Counter claims or counter assertions are claims that are dismissed in an argument. The claims do not agree with the main conclusion.
- A counter-example is an example that challenges the truth of a claim. Counter-examples challenge generalizations.
- Hypothetical reasoning - Something will happen on the condition that something else happens .
- Value judgements – A judgement based upon a value (e.g. murder is wrong). People have conflicting values and values change over time.
- Definitions – Definitions are precise meanings of a word or phrase. Definitions can be argued over, e.g. the definition of rape.
- Causal explanations – Cause and effect (Smoking causes cancer).
- A common cause – A correlation between two things is caused by a third factor.
A correlation between two things may be caused by chance .
- Direction of causation – Does A cause B or does B cause A.
Some things have multi-causal explanations – explanations which show that there are more than one causes causing an effect.
An analogy is a comparison between two things which are seen to be similar .
Criteria to evaluate an argument by analogy 1.Number of instances 2.Number of similarities 3.Strength of conclusion 4.Relevance 5.Number of differences
Extreme analogies should be avoided as they can weaken an argument
Deductive and Inductive reasoning
In a deductive argument the conclusion is guaranteed to follow from the reasons (it is deductively valid). If the conclusion is not guaranteed to follow from the reasons then the argument is invalid and ceases to be deductive reasoning.
Inductive arguments: Where the reasons are true, the conclusion will probably be true. (Chelsea are 12 points ahead in the Premiership with only a few games to go, so it is likely that they will win the league.)
In a strong argument reasons are only relevant if they make a difference to the conclusion.
Deductive reasoning: Where the conclusion is guaranteed to follow from the reasons.
Inductive reasoning: If the reasons are true then it is likely that the conclusion is true.
Often there is a lot of evidence in arguments to support reasons, which in turn support the conclusion.
Flaws in Arguments
Appeal to Tradition
"we've always done it this way" Arguing that because something has always been done in one way in the past,it should continue to be done that way.
Appeal to Popularity
"Everyone likes them" Arguing that something must be the case or true or good because many people engage in an idea or activity.
Appeal to History
"If something has happened before, it will happen again." Arguing that what has happened in the past is always a guide to the future and/or the past will repeat itself.
Appeal to Emotion
"These poor puppies have been abandoned and you could give them the loving home they so desperately need." Arguing through tugging at peoples emotions rather than through logical reasoning/arguement.
Appeal to Authority
Trying to persuade a reader to accept an argument based on the respect for authority rather than logic.
"either or" Reducing an arguement to only two extreme options when there are other possibilities.
Restricting the Options
E.g" We blindfold him or we knock him out....or you just let your fiancee your wedding dress." Presents a limited picture of choices available in a situation in order to support one particular option.
Confusing Necessary and Sufficient Conditions
E.g "I have done everything necessary,registered and trained for the race. But is it sufficient/enough for me to win the race?" Necessary conditions are conditions which must be fulfilled in order for an event to happen. Sufficient conditions are conditions which, if fulfilled, guarantee that an event will happen. Some people confuse necessary and sufficient conditions.
Hasty Generalization
Drawing a general conclusion from insufficient evidence/limited examples.
Conflation
Putting two or more things together that aren't related. Treating two things as the same when in fact they aren't. E.g. obesity is often conflated with lack of fitness.
Straw Man
Misrepresenting and exaggerating one part of the opponent's argument in order to dismiss it and the entire argument. Changing or exaggerating an opponent's position or argument to make it easier to refute.
False Cause
Wrongly assumes a cause-and-effect relationship ('A' causes 'B' without proof that a relationship actually exists).
Slippery Slope
Making one or more unsupported leaps in an argument to arrive at an extreme conclusion.
"People like dogs because dogs are kind pets which people like." Where a reason is the same as the conclusion, so the argument doesn't go anywhere as it just restates the argument rather than actually proving it.
Latin for "it does not follow." An inference or a conclusion that does not follow from the premises,evidence or reasoning given prior.
Latin meaning "against the man." In an argument, this is an attack on the person rather than on the opponent's ideas.
Latin for "after this, therefore because of this." Arguing that because one thing follows another, the first caused the second. But sequence is not cause.
| http://en.m.wikibooks.org/wiki/A-level_Critical_Thinking | 13
16 | First a quick recap of some definitions:
Argument: the most basic sort of reasoning; combines several ‘truth statements’ to arrive at a conclusion – showing/demonstrating it to be true.
Premise: must make a true or false claim (be a statement) - not all sentences do this (imperatives, questions, exclamations). If you are studying Higher Philosophy you will have to select these from a real argument. Look for giveaway words like [are, is etc.] to help you.
Conclusion: the product of an argument’s reasoning, i.e. what it is hoped is shown to be true by the argument. Again for Higher Philosophy students look for words like [since, in conclusion, therefore etc.].
Now we need to sort out some different sorts of arguments which again (surprise surprise) philosophers give big names to. Here are two arguments:
1 Michael lives in Cape Town
2 Cape Town is in South Africa
C Michael lives in South Africa
1 all the other days have not been the end of the world
2 tomorrow is another day
C tomorrow will not be the end of the world
At first glance, these two arguments are very similar, but on closer inspection there is a subtle but important difference. In the first, the conclusion absolutely must be true if both of the premises are. Philosophers call this sort of argument a deductive argument. In the second argument, however, the conclusion does not absolutely have to be true even if both the premises are. Scary as it may be, all the other days that have gone by simply do not ensure that tomorrow will come. They may (and probably do) make it highly probable that tomorrow will come, but I cannot claim the conclusion with absolute certainty. Philosophers call this sort of reasoning an inductive argument. Another example might be something like:
1 All the cats I have seen or heard of have tails.
2 My friend says she has a cat called Floyd.
C Floyd has a tail
Again, this argument does seem persuasive, but to say that it is convincing beyond any doubt would be untrue. No matter how small a chance it is, I have to admit that there is a chance that Floyd may belong to a species of cat that has no tail, or has been involved in a fight or accident in the past that has led to the loss of his tail. | http://edubuzz.org/errorsinreasoning/arguments/understanding-arguments/moving-on-with-arguments-part-1-higher-ib-philosophy/ | 13 |
15 | An example of a syllogism is: “All men are human; all humans are mortal; therefore all men are mortal.”
Origin of SYLLOGISM
Middle English silogisme, from Anglo-French sillogisme, from Latin syllogismus, from Greek syllogismos, from syllogizesthai to syllogize, from syn- + logizesthai to calculate, from logos reckoning, word — more at legend
Form of argument that, in its most commonly discussed instances, has two categorical propositions as premises and one categorical proposition as conclusion. An example of a syllogism is the following argument: Every human is mortal (every M is P); every philosopher is human (every S is M); therefore, every philosopher is mortal (every S is P). Such arguments have exactly three terms (human, philosopher, mortal). Here, the argument is composed of three categorical (as opposed to hypothetical) propositions, it is therefore a categorical syllogism. In a categorical syllogism, the term that occurs in both premises but not in the conclusion (human) is the middle term; the predicate term in the conclusion is called the major term, the subject the minor term. The pattern in which the terms S, M, and P (minor, middle, major) are arranged is called the figure of the syllogism. In this example, the syllogism is in the first figure, since the major term appears as predicate in the first premise and the minor term as subject of the second. | http://www.merriam-webster.com/dictionary/syllogisms | 13 |
18 | Assumption questions are relatively common on the GRE Critical Reasoning component of the Reading Comprehension section. You know you are dealing with an assumption when the question is something like this:
Which of the following is an assumption on which the argument depends?
Much like Strengthen/Weaken questions, Assumption questions require you to identify the premises and the conclusions. Once you’ve done so, keep in mind that the conclusion cannot be properly drawn without an additional piece of information. This missing information is the assumption. In other words, there is a gap in the argument, and unless it is filled, so to speak, the conclusion is not valid.
Let’s take a look at the following example:
Maria studied 3 months for the GRE. Therefore she will score in the top 10%.
In this mini-argument, the premise is that Maria studied 3 months. From this a conclusion is drawn: she will score in the top 10%. What is the assumption here? That one need only study for 3 months to score in the top 10%.
Without this assumption, we cannot draw a valid conclusion. The key to answering an assumption question correctly is identifying what is an assumption and what is not.
Negating the Assumption
To do so I am going to introduce a technique called Negating the Assumption. The idea is that if you take the correct answer choice and turn it into its opposite form, the conclusion will fall apart. If you negate an answer choice and it does not do anything to the conclusion, then you know that answer choice is not an assumption the argument depends on. Look at the answers below:
- Studying for three months automatically means a student will score in the top 10% on the GRE.
- Using prep materials is the only way to score in the top 10%.
Let’s negate both of these answer choices to see what happens.
- Studying for three months does NOT automatically mean a student will score in the top 10%.
- Using prep materials is NOT the only way to score in the top 10%.
By negating (A), the argument falls apart. If studying for three months does not automatically mean scoring in the top 10%, then the conclusion above that Maria will score in the 10% cannot be validly drawn.
By negating (B), we do not affect the conclusion, so we know that answer choice (B) is not an assumption the argument depends on.
Magoosh Practice Question
Now’s let’s try an actual question from Magoosh. Remember to first identify the premises and conclusion. Next make sure to negate each answer choice. Also, make sure to pick the best answer. Don’t pick an answer because when you negate the assumption the conclusion kind of falls apart. When you negate the answer choice, the conclusion will completely fall apart. | http://magoosh.com/gre/2012/gre-critical-reasoning-question-type-assumption-questions/ | 13 |
21 | syllogism, a mode of argument that forms the core of the body of Western logical thought. Aristotle defined syllogistic logic, and his formulations were thought to be the final word in logic; they underwent only minor revisions in the subsequent 2,200 years. Every syllogism is a sequence of three propositions such that the first two imply the third, the conclusion. There are three basic types of syllogism: hypothetical, disjunctive, and categorical. The hypothetical syllogism, modus ponens, has as its first premise a conditional hypothesis: If p then q; it continues: p, therefore q. The disjunctive syllogism, modus tollens, has as its first premise a statement of alternatives: Either p or q; it continues: not q, therefore p. The categorical syllogism comprises three categorical propositions, which must be statements of the form all x are y, no x is y, some x is y, or some x is not y. A categorical syllogism contains precisely three terms: the major term, which is the predicate of the conclusion; the minor term, the subject of the conclusion; and the middle term, which appears in both premises but not in the conclusion. Thus: All philosophers are men (middle term); all men are mortal ; therefore, All philosophers (minor term) are mortal (major term). The premises containing the major and minor terms are named the major and minor premises, respectively. Aristotle noted five basic rules governing the validity of categorical syllogisms: The middle term must be distributed at least once (a term is said to be distributed when it refers to all members of the denoted class, as in all x are y and no x is y ); a term distributed in the conclusion must be distributed in the premise in which it occurs; two negative premises imply no valid conclusion; if one premise is negative, then the conclusion must be negative; and two affirmatives imply an affirmative. John Venn, an English logician, in 1880 introduced a device for analyzing categorical syllogisms, known as the Venn diagram. Three overlapping circles are drawn to represent the classes denoted by the three terms. Universal propositions ( all x are y, no x is y ) are indicated by shading the sections of the circles representing the excluded classes. Particular propositions ( some x is y, some x is not y ) are indicated by placing some mark, usually an "X," in the section of the circle representing the class whose members are specified. The conclusion may then be read directly from the diagram.
The Columbia Electronic Encyclopedia, 6th ed. Copyright © 2012, Columbia University Press. All rights reserved.
See more Encyclopedia articles on: Philosophy, Terms and Concepts | http://www.infoplease.com/encyclopedia/society/syllogism.html | 13 |
29 | The term “file” in Unix covers several types of objects:
The file concept includes both the data contained in the file and information about the file itself (also called meta-data) like its type, its access rights, the latest access times, etc.
To a first approximation, the file system can be considered to be a tree. The root is '/'. The branches are labeled by (file) names, which are strings of any characters excluding '\000' and '/' (but it is good practice to also avoid non-printing characters and spaces). The non-terminal nodes are directories: these nodes always contain two branches . and .. which respectively represent the directory itself and the directory's parent. The other nodes are sometimes called files, as opposed to directories, but this is ambiguous, as we can also designate any node as a “file”. To avoid all ambiguity we refer to them as non-directory files.
The nodes of the tree are addressed by paths. If the start of the path
is the root of the file hierarchy, the path is absolute, whereas if the
start is a directory it is relative. More precisely, a relative
path is a string of file names separated by the character
'/'. An absolute path is a relative path preceded by the
'/' (note the double use of this character both as
a separator and as the name of the root node).
The Filename module handles paths in a portable manner. In particular, Filename.concat concatenates paths without referring to the character '/', allowing the code to function equally well on other operating systems (for example, the path separator character under Windows is '\'). Similarly, the Filename module provides the string values current_dir_name and parent_dir_name to represent the branches . and .. The functions basename and dirname return the prefix d and the suffix b of a path p such that the paths p and d/b refer to the same file, where d is the directory in which the file is found and b is the name of the file. The functions defined in Filename operate only on paths, independently of their actual existence within the file hierarchy.
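For example, the following small sketch (illustrative, not taken from the original text) exercises these Filename operations on a made-up path:

    let () =
      let p = Filename.concat "tmp" "foo.txt" in    (* "tmp/foo.txt" on Unix *)
      let d = Filename.dirname p in                 (* "tmp" *)
      let b = Filename.basename p in                (* "foo.txt" *)
      assert (Filename.concat d b = p);
      (* current_dir_name is "." and parent_dir_name is ".." *)
      print_endline (Filename.concat Filename.parent_dir_name b)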
In fact, strictly speaking, the file hierarchy is not a tree. First, the branches . and .. allow a directory to refer to itself and to move up in the hierarchy, defining paths leading from a directory to itself. Moreover, non-directory files can have many parents (we say that they have many hard links). Finally, there are also symbolic links which can be seen as non-directory files containing a path. Conceptually, this path can be obtained by reading the contents of the symbolic link like an ordinary file. Whenever a symbolic link occurs in the middle of a path we have to follow its path transparently. If s is a symbolic link whose value is the path l, then the path p/s/q represents the file l/q if l is an absolute path, or the file p/l/q if l is a relative path.
Figure 1 gives an example of a file hierarchy. The symbolic link numbered 11 in the figure, whose value is the relative path ../gnu, does not refer to any existing file in the hierarchy (at the moment).
In general, a recursive traversal of the hierarchy will terminate if the following rules are respected: the branches . and .. are never entered, and symbolic links are not followed. But if symbolic links are followed we are traversing a graph and we need to keep track of the nodes we have already visited to avoid loops.
Each process has a current working directory. It is returned by the function getcwd and can be changed with chdir. It is also possible to restrict the view of the file hierarchy by calling chroot p. This makes the node p, which should be a directory, the root of the restricted view of the hierarchy. Absolute file paths are then interpreted according to this new root p (and of course the branch .. at the new root is the new root itself).
There are two ways to access a file. The first is by its file
name (or path name) in the file system hierarchy. Due to
hard links, a file can have many different names. Names are values of type string. For example the system calls unlink, link, symlink and rename all operate at the file name level.
Their effect is as follows:
- unlink f erases the file f, like the Unix command rm -f f.
- link f1 f2 creates a hard link named f2 to the file f1, like the command ln f1 f2.
- symlink f1 f2 creates a symbolic link named f2 to the file f1, like the command ln -s f1 f2.
- rename f1 f2 renames the file f1 to f2, like the command mv f1 f2.
The second way of accessing a file is by a file descriptor. A descriptor represents a pointer to a file along with other information like the current read/write position in the file, the access rights of the file (is it possible to read? write?) and flags which control the behavior of reads and writes (blocking or non-blocking, overwrite, append, etc.). File descriptors are values of the abstract type file_descr.
Access to a file via its descriptor is independent from the access via its name. In particular whenever we get a file descriptor, the file can be destroyed or renamed but the descriptor still points on the original file.
When a program is executed, three descriptors are allocated and tied to the variables stdin, stdout and stderr of the Unix module. They correspond, respectively, to the standard input, standard output and standard error of the process.
When a program is executed on the command line without any redirections, the three descriptors refer to the terminal. But if, for example, the input has been redirected using the shell expression cmd < f, then the descriptor stdin refers to the file named f during the execution of the command cmd. Similarly, cmd > f and cmd 2> f respectively bind the descriptors stdout and stderr to the file named f during the execution of the command.
The system calls stat, lstat and fstat return the meta-attributes of a file; that is, information about the node itself rather than its content. Among other things, this information contains the identity of the file, the type of file, the access rights, the time and date of last access and other information.
The system calls stat and lstat take a file name as an argument, while fstat takes a previously opened descriptor and returns information about the file it points to. stat and lstat differ on symbolic links: lstat returns information about the symbolic link itself, while stat returns information about the file that the link points to. The result of these three calls is a record of type stats whose fields are described in table 1.
|st_dev||The id of the device on which the file is stored.|
|st_ino||The id of the file (inode number) in its partition. The pair (st_dev, st_ino) uniquely identifies the file.|
|st_kind||The file type. The type file_kind is an enumerated type whose constructors are: S_REG (regular file), S_DIR (directory), S_CHR (character device), S_BLK (block device), S_LNK (symbolic link), S_FIFO (named pipe) and S_SOCK (socket).|
|st_perm||Access rights for the file.|
|st_nlink||For a directory: the number of entries in the directory. For others: the number of hard links to this file.|
|st_uid||The id of the file's user owner.|
|st_gid||The id of the file's group owner.|
|st_rdev||The id of the associated peripheral (for special files).|
|st_size||The file size, in bytes.|
|st_atime||Last file content access date (in seconds from January 1st 1970, midnight, GMT).|
|st_mtime||Last file content modification date (idem).|
|st_ctime||Last file state modification date: either a write to the file or a change in access rights, user or group owner, or number of links (idem).|
A file is uniquely identified by the pair made of its device number (typically the disk partition where it is located) st_dev and its inode number st_ino. A file has one user owner st_uid and one group owner st_gid. All the users and groups on the machine are usually described in the /etc/passwd and /etc/group files. We can look them up by name in a portable manner with the functions getpwnam and getgrnam, or by id with getpwuid and getgrgid.
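As an illustration (not part of the original text), looking up the user "root" could be done as follows; pw_name, pw_uid and pw_gid are fields of the Unix.passwd_entry record:

    let () =
      let entry = Unix.getpwnam "root" in
      Printf.printf "user %s has uid %d and gid %d\n"
        entry.Unix.pw_name entry.Unix.pw_uid entry.Unix.pw_gid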
The name of the user of a running process and all the groups to which it belongs can be retrieved with the commands getlogin and getgroups.
The call chown changes the owner (second argument) and the group (third argument) of a file (first argument). If we have a file descriptor, fchown can be used instead. Only the super user can change this information arbitrarily.
Access rights are encoded as bits in an integer, and the type
file_perm is just an abbreviation for the type
int. They specify special bits and read, write and
execution rights for the user owner, the group owner and the other
users as a vector of bits, where in each of the user, group and other fields, the order of bits indicates read (r), write (w) and execute (x).
The permissions on a file are the union of all these individual
rights, as shown in table 2.
|Bit (octal)||Notation ||Access right|
|0o100||--x------||executable by the user owner|
|0o200||-w-------||writable by the user owner|
|0o400||r--------||readable by the user owner|
|0o10||-----x---||executable by members of the group owner|
|0o20||----w----||writable by members of the group owner|
|0o40||---r-----||readable by members of the group owner|
|0o1||--------x||executable by other users|
|0o2||-------w-||writable by other users|
|0o4||------r--||readable by other users|
|0o1000||--------t||the bit t on the rights of other users (sticky bit)|
|0o2000||-----s---||the bit s on group execution (set group id on execution)|
|0o4000||--s------||the bit s on user execution (set user id on execution)|
For files, the meaning of read, write and execute permissions is
obvious. For a directory, the execute permission means the right to
enter it (to
chdir to it) and read permission the right to list
its contents. Read permission on a directory is however not needed to
read its files or sub-directories (but we then need to know their names).
The special bits do not have meaning unless the x bit is set (if they appear with no x bit set, they do not give additional rights). This is why their representation is superimposed on the bit x: the letters S and T are used instead of s and t when the corresponding x bit is not set. The bit t allows sub-directories to inherit the permissions of the parent directory. On a directory, the bit s allows the use of the directory's group rather than the user's group when creating entries inside it. For an executable file, the bit s allows the changing at execution time of the user's effective identity or group with the system calls setuid and setgid. The process also preserves its original identities unless it has super user privileges, in which case setuid and setgid change both its effective and original user and group
identities. The original identity is preserved to allow
the process to subsequently recover it as its effective identity
without needing further privileges. The system calls getuid and
getgid return the original identities and
geteuid and getegid return the effective identities.
A process also has a file creation mask encoded the same way file permissions are. As its name suggests, the mask specifies prohibitions (rights to remove): during file creation a bit set to 1 in the mask is set to 0 in the permissions of the created file. The mask can be consulted and changed with the system call umask:
Like many system calls that modify system variables, the modifying function returns the old value of the variable. Thus, to just look up the value we need to call the function twice. Once with an arbitrary value to get the mask and a second time to put it back. For example:
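A minimal sketch of this read-back technique (the helper name read_umask is ours):

    let read_umask () =
      let mask = Unix.umask 0 in     (* install an arbitrary mask and obtain the old one *)
      let _ = Unix.umask mask in     (* restore the original mask *)
      mask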
File access permissions can be modified with the system calls chmod and fchmod:
and they can be tested “dynamically” with the system call access:
where requested access rights to the file are specified by a list of
values of type access_permission whose meaning is
obvious except for
F_OK which just checks for the file’s
existence (without checking for the other rights). The function access raises an error if the access rights are not granted.
Note that the information inferred by
access may be more
restrictive than the information returned by
lstat because a file
system may be mounted with restricted rights — for example in
read-only mode. In that case
access will deny a write permission
on a file whose meta-attributes would allow it. This is why we
distinguish between “dynamic” (what a process can actually do)
and “static” (what the file system specifies) information.
Only the kernel can write in directories (when files are created). Thus opening a directory in write mode is prohibited. In certain versions of Unix a directory may be opened in read only mode and read with read, but other versions prohibit it. However, even if this is possible, it is preferable not to do so because the format of directory entries vary between Unix versions and is often complex. The following functions allow reading a directory sequentially in a portable manner:
The system call opendir returns a directory descriptor for a
directory. readdir reads the next entry of a descriptor, and
returns a file name relative to the directory or raises the exception
End_of_file if the end of the directory is
reached. rewinddir repositions the descriptor at the
beginning of the directory and closedir closes the directory descriptor. The following library function, in the module Misc, iterates a function f over the entries of a directory; a possible implementation is sketched below.
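A plausible implementation of such an iterator (the name iter_dir is conventional, not necessarily the original's):

    let iter_dir f dirname =
      let d = Unix.opendir dirname in
      try
        while true do f (Unix.readdir d) done
      with End_of_file -> Unix.closedir d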
To create a directory or remove an empty directory, we have mkdir and rmdir:
The second argument of
mkdir determines the access rights of the
new directory. Note that we can only remove a directory that is
already empty. To remove a directory and its contents, it is thus
necessary to first recursively empty the contents of the directory and
then remove the directory.
The Unix command find lists the files of a hierarchy matching certain criteria (file name, type, permissions, etc.). In this section we develop a library function FindLib.find that implements these searches and a command find that provides a version of the Unix command find supporting the options -follow and -maxdepth. We specify the following interface for FindLib: a call to find traverses the file hierarchy starting from the roots specified in the list roots (absolute or relative to the current directory of the process when the call is made) up to a maximum depth depth, following symbolic links if the flag follow is set. The paths found under a root r include r as a prefix. Each found path p is given to the function action along with the data returned by Unix.lstat p (or Unix.stat p if follow is set). The function action returns a boolean indicating, for directories, whether the search should continue for its contents (true) or not (false).
The handler function reports traversal errors of type Unix_error. Whenever an error occurs the arguments of the exception are given to the handler function and the traversal continues. However, when an exception is raised by the functions action or handler themselves, we immediately stop the traversal and let it propagate to the caller. To propagate a Unix_error exception without catching it like a traversal error, we wrap these exceptions in the Hidden exception (see the implementation).
A directory is identified by the pair id (line 12) made of its device and inode number. The list of visited directories keeps track of the directories that have already been visited. In fact this information is only needed if symbolic links are followed.
It is now easy to program the find command. The essential part of the code parses the command line arguments with the Arg module. Although our find command is quite limited, the library function FindLib.find is far more general, as shown by the following example, which, starting from the current directory, recursively prints files without printing or entering directories whose name matches a given excluded name.
getcwd is not a system call but is defined in the
Unix module. Give a “primitive” implementation of
getcwd. First describe the principle of your algorithm with words
and then implement it (you should avoid repeating the same system call needlessly).
The openfile function allows us to obtain a descriptor for a file of a given name (the corresponding system call is open; however, open is a keyword in OCaml).
The first argument is the name of the file to open. The second argument, a list of flags from the enumerated type open_flag, describes the mode in which the file should be opened and what to do if it does not exist. The third argument of type file_perm defines the file’s access rights, should the file be created. The result is a file descriptor for the given file name with the read/write position set to the beginning of the file.
The flag list must contain exactly one of the following flags:
|O_RDONLY||Open in read-only mode.|
|O_WRONLY||Open in write-only mode.|
|O_RDWR||Open in read and write mode.|
These flags determine whether read or write calls can be done on the descriptor. The call openfile fails if a process requests an open in write (resp. read) mode on a file on which it has no right to write (resp. read). For this reason O_RDWR should not be used systematically.
The flag list can also contain one or more of the following values:
|O_APPEND||Open in append mode.|
|O_CREAT||Create the file if it does not exist.|
|O_TRUNC||Truncate the file to zero if it already exists.|
|O_EXCL||Fail if the file already exists.|
|O_NONBLOCK||Open in non-blocking mode.|
|O_NOCTTY||Do not function in console mode.|
|O_SYNC||Perform the writes in synchronous mode.|
|O_DSYNC||Perform the data writes in synchronous mode.|
|O_RSYNC||Perform the reads in synchronous mode.|
The first group defines the behavior to follow if the file exists or not. With:
O_APPEND, the read/write position will be set at the end of the file before each write. Consequently any written data will be added at the end of file. Without
O_APPEND, writes occur at the current read/write position (initially, the beginning of the file).
O_TRUNC, the file is truncated when it is opened. The length of the file is set to zero and the bytes contained in the file are lost, and writes start from an empty file. Without
O_TRUNC, the writes are made at the start of the file overwriting any data that may already be there.
O_CREAT, creates the file if it does not exist. The created file is empty and its access rights are specified by the third argument and the creation mask of the process (the mask can be retrieved and changed with umask).
With O_EXCL, openfile fails if the file already exists. This flag, used in conjunction with O_CREAT, allows files to be used as locks1. A process which wants to take the lock calls openfile on the file with O_EXCL and O_CREAT. If the file already exists, this means that another process already holds the lock and openfile raises an error. If the file does not exist, openfile returns without error and the file is created, preventing other processes from taking the lock. To release the lock the process calls unlink on it. The creation of a file is an atomic operation: if two processes try to create the same file in parallel with the options O_EXCL and O_CREAT, at most one of them can succeed. The drawbacks of this technique are that a process must busy wait to acquire a lock that is currently held, and that the abnormal termination of a process holding a lock may never release it.
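A sketch of this locking idiom (the helper names take_lock and release_lock are ours):

    (* Raises Unix_error (EEXIST, _, _) if another process already holds the lock. *)
    let take_lock name =
      Unix.close (Unix.openfile name [Unix.O_WRONLY; Unix.O_CREAT; Unix.O_EXCL] 0o600)

    let release_lock name = Unix.unlink name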
Most programs use 0o666 for the third argument of openfile. This means rw-rw-rw- in symbolic notation. With the default creation mask of 0o022, the file is thus created with the permissions rw-r--r--. With a more lenient mask of 0o002, the file is created with the permissions rw-rw-r--.
To read from a file: the third argument can be anything since O_CREAT is not specified; 0 is usually given.
To write to a file, discarding any previous content, we truncate it when opening.
If the file will contain executable code (e.g. files produced by ld, scripts, etc.), we create it with execution permissions.
If the file must be confidential (e.g. “mailbox” files), we restrict its permissions to the user owner.
To append data at the end of an existing file, or create it if it doesn't exist, we open it with O_APPEND and O_CREAT. Typical calls for these cases are sketched below.
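These calls are illustrative sketches, not the original's code; filename stands for any path:

    let filename = "data"
    (* read an existing file; the third argument is ignored without O_CREAT *)
    let fd_in   = Unix.openfile filename [Unix.O_RDONLY] 0
    (* write a file, truncating any previous contents *)
    let fd_out  = Unix.openfile filename [Unix.O_WRONLY; Unix.O_CREAT; Unix.O_TRUNC] 0o666
    (* create an executable file (a script, a file produced by ld, ...) *)
    let fd_exe  = Unix.openfile filename [Unix.O_WRONLY; Unix.O_CREAT; Unix.O_TRUNC] 0o777
    (* create a confidential file, accessible only by its owner *)
    let fd_priv = Unix.openfile filename [Unix.O_WRONLY; Unix.O_CREAT; Unix.O_TRUNC] 0o600
    (* append to a file, creating it if needed *)
    let fd_log  = Unix.openfile filename [Unix.O_WRONLY; Unix.O_CREAT; Unix.O_APPEND] 0o666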
The O_NONBLOCK flag guarantees that if the file is a named pipe or a special file then the file opening and subsequent reads and writes will be non-blocking.
The O_NOCTTY flag guarantees that if the file is a control terminal (keyboard, window, etc.), it won't become the controlling terminal of the calling process.
The last group of flags specifies how to synchronize read and write operations. By default these operations are not synchronized. With:
O_DSYNC, the data is written synchronously such that the process is blocked until all the writes have been done physically on the media (usually a disk).
O_SYNC, the file data and its meta-attributes are written synchronously.
O_RSYNC, when provided together with O_DSYNC, specifies that the data reads are also synchronized: it is guaranteed that all current writes (requested but not necessarily performed) to the file are really written to the media before the next read. If O_RSYNC is provided with O_SYNC, the above also applies to meta-attribute changes.
The system calls read and write read and write bytes in a file. For historical reasons, the system call write is provided in OCaml under the name single_write. The two calls read and single_write have the same
interface. The first argument is the file descriptor to act on. The
second argument is a string which will hold the read bytes (for
read) or the bytes to write (for
single_write). The third
argument is the position in the string of the first byte to be written
or read. The fourth argument is the number of the bytes to be read or
written. In fact the third and fourth arguments define a sub-string of the second argument (the sub-string should be valid; read and single_write do not check this). read and single_write return the number of bytes actually read or written.
Read and write calls are performed from the file descriptor's current read/write position (if the file was opened in O_APPEND mode, this position is set at the end of the file prior to any write). After the system call, the current position is advanced by the number of bytes read or written.
For writes, the number of bytes actually written is usually the number of bytes requested. However there are exceptions: (i) if it is not possible to write the bytes (e.g. if the disk is full) (ii) the descriptor is a pipe or a socket open in non-blocking mode (iii) due to OCaml, if the write is too large.
The reason for (iii) is that internally OCaml uses auxiliary
buffers whose size is bounded by a maximal value. If this value is
exceeded the write will be partial. To work around this problem
OCaml also provides the function write which
iterates the writes until all the data is written or an error occurs.
The problem is that in case of error there’s no way to know the number
of bytes that were actually written. Hence
single_write should be
preferred because it preserves the atomicity of writes (we know
exactly what was written) and it is more faithful to the original Unix
system call (note that the implementation of single_write is described in section 5.7).
For example, assume fd is a descriptor open in write-only mode. The call sketched below writes the characters "lo worl" in the corresponding file, and returns 7.
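A sketch of that call, packaged as a function taking the (assumed write-only) descriptor; in current OCaml, write_substring takes a string argument where the original used write directly:

    let example fd =
      (* writes the 7 bytes of "Hello world!" starting at index 3, i.e. "lo worl" *)
      Unix.write_substring fd "Hello world!" 3 7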
For reads, it is possible that the number of bytes actually read is
smaller than the number of requested bytes. For example when the end
of file is near, that is when the number of bytes between the current
position and the end of file is less than the number of requested
bytes. In particular, when the current position is at the end of file,
read returns zero. The convention “zero equals end of
file” also holds for special files, pipes and sockets. For example,
read on a terminal returns zero if we issue a ctrl-D on the terminal.
Another example is when we read from a terminal. In that case,
read blocks until an entire line is available. If the line length
is smaller than the requested bytes
read returns immediately with
the line without waiting for more data to reach the number of
requested bytes. (This is the default behavior for terminals, but it
can be changed to read character-by-character instead of
line-by-line, see section 2.13 and the type
terminal_io for more details.)
The following expression reads at most 100 characters from standard input and returns them as a string.
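A sketch using the current Bytes API (the original used a mutable string buffer):

    let read_stdin_100 () =
      let buffer = Bytes.create 100 in
      let n = Unix.read Unix.stdin buffer 0 100 in
      Bytes.sub_string buffer 0 n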
really_read below has the same interface as
read, but makes additional read attempts to try to get
the number of requested bytes. It raises the exception
End_of_file if the end of file is reached while doing this.
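A possible implementation of really_read along these lines:

    let rec really_read fd buffer start length =
      if length <= 0 then ()
      else
        match Unix.read fd buffer start length with
        | 0 -> raise End_of_file
        | r -> really_read fd buffer (start + r) (length - r)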
The system call close closes a file descriptor.
Once a descriptor is closed, all attempts to read, write, or do
anything else with the descriptor will fail. Descriptors should be
closed when they are no longer needed; but it is not mandatory. In
particular, and in contrast to the channels of the Pervasives module, a file descriptor doesn't need to be closed to ensure that all pending writes have been performed, as write requests made with write are immediately transmitted to the kernel. On the other hand, the number of descriptors allocated by a process is limited by the kernel (from several hundreds to thousands). Doing a close on an unused descriptor releases it, so that the process does not run out of descriptors.
We program a command file_copy which, given two arguments f1 and f2, copies to the file f2 the bytes contained in the file f1. The bulk of the work is performed by the function file_copy, sketched below.
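A sketch of file_copy along the lines described next (the buffer size of 8192 bytes matches the discussion that follows):

    let buffer_size = 8192
    let buffer = Bytes.create buffer_size

    let file_copy input_name output_name =
      let fd_in = Unix.openfile input_name [Unix.O_RDONLY] 0 in
      let fd_out =
        Unix.openfile output_name [Unix.O_WRONLY; Unix.O_CREAT; Unix.O_TRUNC] 0o666 in
      let rec copy_loop () =
        match Unix.read fd_in buffer 0 buffer_size with
        | 0 -> ()                                            (* end of file: stop *)
        | r -> ignore (Unix.write fd_out buffer 0 r); copy_loop ()
      in
      copy_loop ();
      Unix.close fd_in;
      Unix.close fd_out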
First we open a descriptor in read-only mode on the input file and
another in write-only mode on the output file.
If the output file already exists, it is truncated (option
O_TRUNC) and if it does not exist it is created (option
O_CREAT) with the permissions
rw-rw-rw- modified by the creation
mask. (This is unsatisfactory: if we copy an executable file, we would
like the copy to be also executable. We will see later how to give
a copy the same permissions as the original.)
In the copy_loop function we do the copy by blocks of buffer_size bytes. We request buffer_size bytes to read. If read returns zero, we have reached the end of file and the copy is over. Otherwise we write the r bytes we have read in the output file and start again.
Finally, we close the two descriptors. The main program verifies that the command received two arguments and passes them to file_copy. Any error occurring during the copy results in a Unix_error exception which is caught and displayed by handle_unix_error. Examples of errors include inability to open the input file because it does not exist, failure to read because of restricted permissions, failure to write because the disk is full, etc.
Add an option -a to the program, such that file_copy -a f1 f2 appends the contents of f1 to the end of f2.
In the example file_copy, reads were made in blocks of 8192 bytes. Why not read byte by byte, or megabyte by megabyte? The reason is efficiency.
Figure 2 shows the copy speed of file_copy, in bytes per second, against the size of blocks (the value of buffer_size). The amount of data transferred is the same regardless of the size of the blocks.
For small block sizes, the copy speed is almost proportional to the block size. Most of the time is spent not in data transfers but in the execution of the loop copy_loop and in the calls to read and write. By profiling more carefully we can see that most of the time is spent in the calls to read and write. We conclude that a system call, even if it has not much to do, takes a minimum of about 4 microseconds (on the machine that was used for the test — a 2.8 GHz Pentium 4), let us say from 1 to 10 microseconds. For small input/output blocks, the duration of the system call dominates.
For larger blocks, between 4KB and 1MB, the copy speed is constant and maximal. Here, the time spent in system calls and the loop is small relative to the time spent on the data transfer. Also, the buffer size becomes bigger than the cache sizes used by the system and the time spent by the system to make the transfer dominates the cost of a system call2.
Finally, for very large blocks (8MB and more) the speed is slightly under the maximum. Coming into play here is the time needed to allocate the block and assign memory pages to it as it fills up.
The moral of the story is that, a system call, even if it does very little work, costs dearly — much more than a normal function call: roughly, 2 to 20 microseconds for each system call, depending on the architecture. It is therefore important to minimize the number of system calls. In particular, read and write operations should be made in blocks of reasonable size and not character by character.
In examples like
file_copy, it is not difficult to do
input/output with large blocks. But other types of programs are more
naturally written with character by character input or output (e.g.
reading a line from a file, lexical analysis, displaying a number etc.).
To satisfy the needs of these programs, most systems provide
input/output libraries with an additional layer of software between
the application and the operating system. For example, in OCaml the
Pervasives module defines the abstract types in_channel and out_channel, similar to file descriptors, and functions on these types like input_char and output_string. This layer uses buffers to
group sequences of character by character reads or writes into a
single system call to read or write. This results in better
performance for programs that proceed character by character.
Moreover this additional layer makes programs more portable: we just
need to implement this layer with the system calls provided by another
operating system to port all the programs that use this library on
this new platform.
To illustrate the buffered input/output techniques, we implement a fragment of the Pervasives library. A possible interface is sketched below.
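The original interface is not reproduced here; a plausible version covering the functions discussed in this section is:

    type in_channel
    type out_channel
    val open_in : string -> in_channel
    val input_char : in_channel -> char
    val close_in : in_channel -> unit
    val open_out : string -> out_channel
    val output_char : out_channel -> char -> unit
    val close_out : out_channel -> unit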
We start with the “input” part. The abstract type
in_channel is defined as follows:
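A possible definition, using bytes for the mutable buffer where the original used a mutable string:

    type in_channel =
      { in_buffer : bytes;            (* the buffer itself *)
        in_fd : Unix.file_descr;      (* descriptor of the file being read *)
        mutable in_pos : int;         (* current read position in the buffer *)
        mutable in_end : int }        (* number of valid bytes preloaded in the buffer *)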
The character string of the
in_buffer field is, literally, the
buffer. The field
in_fd is a (Unix) file descriptor, opened on
the file to read. The field
in_pos is the current read position
in the buffer. The field
in_end is the number of valid
characters preloaded in the buffer.
The fields in_pos and in_end will be modified in place during read operations; we therefore declare them as mutable.
When we open a file for reading, we create a buffer of reasonable size
(large enough so as not to make too many system calls; small enough so
as not to waste memory). We then initialize the field in_fd with a Unix file descriptor opened in read-only mode on the given file. The buffer is initially empty (it does not contain any character from the file); the field in_end is therefore initialized to zero.
To read a character from an
in_channel, we do one of two
things. Either there is at least one unread character in the buffer;
that is to say, the field
in_pos is less than the field
in_end. We then return this character located at
in_pos. Or the buffer is empty and we call read to refill the buffer. If
read returns zero, we have reached the end
of the file and we raise the exception
End_of_file. Otherwise, we
put the number of characters read in the field
in_end (we may
receive less characters than we requested, thus the buffer may be
only partially refilled) and we return the first character read.
Closing an in_channel just closes the underlying Unix file descriptor.
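A sketch of the input side under the type definition above (buffer_size is an arbitrary reasonable value):

    let buffer_size = 8192

    let open_in filename =
      { in_buffer = Bytes.create buffer_size;
        in_fd = Unix.openfile filename [Unix.O_RDONLY] 0;
        in_pos = 0;
        in_end = 0 }

    let input_char chan =
      if chan.in_pos < chan.in_end then begin
        let c = Bytes.get chan.in_buffer chan.in_pos in
        chan.in_pos <- chan.in_pos + 1;
        c
      end else begin
        (* buffer exhausted: refill it with a single read system call *)
        let n = Unix.read chan.in_fd chan.in_buffer 0 (Bytes.length chan.in_buffer) in
        if n = 0 then raise End_of_file
        else begin
          chan.in_end <- n;
          chan.in_pos <- 1;
          Bytes.get chan.in_buffer 0
        end
      end

    let close_in chan = Unix.close chan.in_fd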
The “output” part is very similar to the “input” part. The only asymmetry is that the buffer now contains incomplete writes (characters that have already been buffered but not written to the file descriptor), and not reads in advance (characters that have buffered, but not yet read).
To write a character on an
out_channel, we do one of two things.
Either the buffer is not full and we just store the character in the
buffer at the position
out_pos and increment that value. Or the
buffer is full and we empty it with a call to
write and then
store the character at the beginning of the buffer.
When we close an
out_channel, we must not forget to write the
buffer contents (the characters from 0 to
out_pos - 1) to the
file otherwise the writes made on the channel since the last time
the buffer was emptied would be lost.
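A sketch of the output side, with the type and field names chosen by analogy with the input side (out_buffer, out_fd, out_pos):

    type out_channel =
      { out_buffer : bytes;
        out_fd : Unix.file_descr;
        mutable out_pos : int }       (* number of buffered, not yet written, bytes *)

    let open_out filename =
      { out_buffer = Bytes.create 8192;
        out_fd = Unix.openfile filename [Unix.O_WRONLY; Unix.O_CREAT; Unix.O_TRUNC] 0o666;
        out_pos = 0 }

    let output_char chan c =
      if chan.out_pos >= Bytes.length chan.out_buffer then begin
        (* buffer full: flush it before storing the character *)
        ignore (Unix.write chan.out_fd chan.out_buffer 0 chan.out_pos);
        chan.out_pos <- 0
      end;
      Bytes.set chan.out_buffer chan.out_pos c;
      chan.out_pos <- chan.out_pos + 1

    let close_out chan =
      (* flush the buffered characters before closing *)
      ignore (Unix.write chan.out_fd chan.out_buffer 0 chan.out_pos);
      Unix.close chan.out_fd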
As an exercise, implement a function output_string which behaves like a sequence of output_char on each character of the string, but is more efficient.
The system call lseek allows to set the current read/write position of a file descriptor.
The first argument is the file descriptor and the second one the desired position. The latter is interpreted according to the value of the third argument of type seek_command. This enumerated type specifies the kind of position:
|SEEK_SET||Absolute position. The second argument specifies the character number to point to. The first character of a file is at position zero.|
|SEEK_CUR||Position relative to the current position. The second argument is an offset relative to the current position. A positive value moves forward and a negative value moves backwards.|
|SEEK_END||Position relative to the end of file. The second argument is an offset relative to the end of file. As for SEEK_CUR, it can be positive or negative.|
The value returned by lseek is the resulting absolute position. An error is raised if a negative absolute position is
requested. The requested position can be located after the end
of file. In that case, a
read returns zero (end of
file reached) and a
write extends the file with zeros until
that position and then writes the supplied data.
To position the cursor on the 1000th character of a file:
To rewind by one character:
To find out the size of a file:
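Sketches of these three operations, assuming fd is an open descriptor:

    let goto_1000 fd = Unix.lseek fd 1000 Unix.SEEK_SET   (* character number 1000 *)
    let back_one fd  = Unix.lseek fd (-1) Unix.SEEK_CUR   (* one character backwards *)
    let file_size fd = Unix.lseek fd 0 Unix.SEEK_END      (* offset of the end of file *)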
For descriptors opened in
O_APPEND mode, the read/write position
is automatically set at the end of the file before each write. Thus
lseek is useless to set the write position, it may however
be useful to set the read position.
The behavior of
lseek is undefined on certain type of files for
which absolute access is meaningless: communication devices (pipes,
sockets) but also many special files like the terminal.
In most Unix implementations a call to
lseek on these files is
simply ignored: the read/write position is set but read/write
operations ignore it. In some implementations,
lseek on a pipe or
a socket triggers an error.
The command tail displays the last n lines of a file. How can it be implemented efficiently on regular files? What can we do for the other kinds of files? How can the option -f be implemented?
In Unix, data communication is done via file descriptors representing either permanent files (files, peripherals) or volatile ones (pipes and sockets, see chapters 5 and 6). File descriptors provide a uniform and media-independent interface for data communication. Of course the actual implementation of the operations on a file descriptor depends on the underlying media.
However this uniformity breaks when we need to access all the features provided by a given media. General operations (opening, writing, reading, etc.) remain uniform on most descriptors but even, on certain special files, these may have an ad hoc behavior defined by the kind of peripheral and its parameters. There are also operations that work only with certain kind of media.
We can shorten a normal file with the system calls truncate and ftruncate.
The first argument is the file to truncate and the second the desired size. All the data after this position is lost.
Most operations on files “follow” symbolic links in the sense that they do not apply to the link itself but to the file on which the link points (for example openfile, stat, truncate, opendir, etc.).
The two system calls symlink and readlink operate specifically on symbolic links:
symlink f1 f2 creates the file f2 as a symbolic link to f1 (like the Unix command ln -s f1 f2). The call
readlink returns the content of a symbolic link, i.e. the name of
the file to which the link points.
Special files can be of “character” or “block” type. The former are character streams: we can read or write characters only sequentially. These are the terminals, sound devices, printers, etc. The latter, typically disks, have a permanent medium: characters can be read by blocks and even seeked relative to the current position.
Among the special files, we may distinguish:
|/dev/null||This is the black hole which swallows everything we put into it and from which nothing comes out. This is extremely useful for ignoring the results of a process: we redirect its output to /dev/null.|
|/dev/tty*||These are the control terminals.|
|/dev/pty*||These are the pseudo-terminals: they are not real terminals but simulate them (they provide the same interface).|
|/dev/hd*, /dev/sd*||These are the disks.|
|/proc||Under Linux, system parameters organized as a file system. They allow reads and writes.|
The usual file system calls can behave differently on special files. However, most special files (terminals, tape drives, disks, etc.) respond to read and write in the obvious manner (but sometimes with restrictions on the number of bytes written or read), while many ignore lseek.
In addition to the usual file system calls, special files which represent peripherals must be commanded and/or configured dynamically. For example, for a tape drive, rewind or fast forward the tape; for a terminal, choice of the line editing mode, behavior of special characters, serial connection parameters (speed, parity, etc.). These operations are made in Unix with the system call ioctl which group together all the particular cases. However, this system call is not provided by OCaml; it is ill-defined and cannot be treated in a uniform way.
Terminals and pseudo-terminals are special files of type character which can be configured from OCaml. The system call tcgetattr takes a file descriptor open on a special file and returns a structure of type terminal_io which describes the status of the terminal according to the posix standard.
This structure can be modified and given to the function tcsetattr to change the attributes of the peripheral.
The first argument is the file descriptor of the peripheral. The last
argument is a structure of type
terminal_io describing the
parameters of the peripheral as we want them. The second argument is a
value of the enumerated type setattr_when that
indicates when the change must be done: immediately (TCSANOW), after having transmitted all written data (TCSADRAIN) or after having read all the received data (TCSAFLUSH). TCSADRAIN is recommended for changing write parameters and TCSAFLUSH for read parameters.
When a password is read, characters entered by the user should not be echoed if the standard input is connected to a terminal or a pseudo-terminal.
The read_passwd function starts by getting the current settings of the terminal connected to stdin. Then it defines a modified version of these in which characters are not echoed. If this fails the standard input is not a control terminal and we just read a line. Otherwise we display a message, change the terminal settings, read the password and put the terminal back in its initial state. Care must be taken to set the terminal back to its initial state even if the read fails with an exception.
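A sketch of read_passwd along these lines; only the c_echo field of terminal_io is cleared here, whereas the original may also disable the other echo-related flags:

    let read_passwd message =
      match
        (try
           let default = Unix.tcgetattr Unix.stdin in
           Some (default, { default with Unix.c_echo = false })
         with _ -> None)
      with
      | None ->                                   (* stdin is not a terminal *)
          print_string message; read_line ()
      | Some (default, silent) ->
          print_string message;
          Unix.tcsetattr Unix.stdin Unix.TCSANOW silent;
          (try
             let s = read_line () in
             print_newline ();
             Unix.tcsetattr Unix.stdin Unix.TCSANOW default;
             s
           with e ->
             Unix.tcsetattr Unix.stdin Unix.TCSANOW default;
             raise e)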
Sometimes a program needs to start another and connect its standard input
to a terminal (or pseudo-terminal). OCaml does not provide any
support for this3. To achieve that, we must manually look among the
pseudo-terminals (in general, they are files with names in the form of
/dev/tty[a-z][a-f0-9]) and find one that is not already open. We
can then open this file and start the program with this file on its standard input.
Four other functions control the stream of data of a terminal (flush waiting data, wait for the end of transmission and restart communication).
The function tcsendbreak sends an interrupt to the peripheral. The second argument is the duration of the interrupt (0 is interpreted as the default value for the peripheral).
The function tcdrain waits for all written data to be transmitted.
Depending on the value of the second argument, a call to the function tcflush discards the data written but not yet transmitted (TCOFLUSH), the data received but not yet read (TCIFLUSH), or both (TCIOFLUSH).
Depending on the value of the second argument, a call to the function tcflow suspends the data transmission (TCOOFF), restarts the transmission (TCOON), sends a control character stop or start to request that the transmission be suspended (TCIOFF) or restarted (TCION).
The function setsid puts the process in a new session and detaches it from the terminal.
Two processes can modify the same file in parallel; however, their
writes may collide and result in inconsistent data. In some cases data
is always written at the end, and opening the file with O_APPEND prevents this. This is fine for log files but it does not work for files that store, for example, a database, because writes are performed at arbitrary positions. In that case processes using the file must collaborate in order not to step on each other's toes. A
lock on the whole file can be implemented with an auxiliary file (see the discussion of O_EXCL above), but the system call lockf allows
for finer synchronization patterns by locking only parts of a file.
We extend the function
file_copy (section 2.9) to
support symbolic links and directories in addition to normal files.
For directories, we recursively copy their contents.
To copy normal files we reuse the function file_copy we already wrote. The function set_infos below modifies the owner, the
access rights and the last dates of access/modification
of a file. We use it to preserve this information for copied files.
The system call utime modifies the dates of access and
modification. We use chmod and chown to re-establish
the access rights and the owner. For normal users, there are
a certain number of cases where
chown will fail with a
“permission denied” error. We catch this error and ignore it.
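A sketch of set_infos; utimes, chmod and chown are the Unix module counterparts of the calls mentioned above:

    let set_infos filename infos =
      Unix.utimes filename infos.Unix.st_atime infos.Unix.st_mtime;
      Unix.chmod filename infos.Unix.st_perm;
      try
        Unix.chown filename infos.Unix.st_uid infos.Unix.st_gid
      with Unix.Unix_error (Unix.EPERM, _, _) -> ()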
Here’s the main recursive function.
We begin by reading the information of the
source file. If it is
a normal file, we copy its contents with
file_copy and its
set_infos. If it is a symbolic link, we read
where it points to and create a link pointing to the same object. If
it is a directory, we create a destination directory, then we read the
directory’s entries (ignoring the entries about the directory itself
or its parent) and recursively call
copy_rec for each entry. All
other file types are ignored, with a warning.
The main program is straightforward: it reads the source and destination paths from the command line and applies copy_rec to them.
Copy hard links cleverly. As written above
copy_rec creates n
duplicates of the same file whenever a file occurs under n different
names in the hierarchy to copy. Try to detect this situation, copy
the file only once and make hard links in the destination hierarchy.
The tar file format (for tape archive) can store a file hierarchy into a single file. It can be seen as a mini file system. In this section we define functions to read and write tar files. We also program a command readtar such that readtar a displays the names of the files contained in the archive a, and readtar a f extracts the contents of the file f contained in a. Extracting the whole file hierarchy of an archive and generating an archive for a file hierarchy is left as an exercise.
A tar archive is a set of records. Each record represents a
file; it starts with a header which encodes the information
about the file (its name, type, size, owners, etc.) and is followed by
the contents of the file. The header is a block of 512 bytes structured as
shown in table 3.
|108||8||octal||Id of user owner|
|116||8||octal||Id of group owner|
|124||12||octal||File size (in bytes)|
|136||12||octal||Date of last modification|
|265||32||string||Name of user owner|
|297||32||string||Name of group owner|
|329||8||octal||Peripheral major number|
|337||8||octal||Peripheral minor number|
String and numeric fields are terminated by the character '\000', except the fields kind and checksum.
The file contents is stored right after the header, its size is rounded to a multiple of 512 bytes (the extra space is filled with zeros). Records are stored one after the other. If needed, the file is padded with empty blocks to reach at least 20 blocks.
Since tar archives are also designed to be written on brittle media
and reread many years later, the header contains a checksum field which allows detecting when the header is damaged. Its value is the sum of all the bytes of the header (to compute that sum we assume that the checksum field itself is made of zeros).
The kind header field encodes the file type in a byte as follows4:
Most of the cases correspond to the values of the Unix file type
file_kind stored in the
st_kind field of the stats structure. The kind
LINK is for hard links which
must lead to another file already stored within the archive.
The kind CONT is for an ordinary file, but stored in a contiguous area of memory (this
is a feature of some file systems, we can treat it like an ordinary
file). The link header field stores the destination of the link when the kind is LNK or
LINK. The fields
major and minor contain the major
and minor numbers of the peripheral when the kind is CHR or
BLK. These three fields are not used in other cases.
The value of the
kind field is naturally represented by a
variant type and the header by a record:
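Plausible definitions for these two types; the constructor and field names are a best guess, not necessarily the original declarations:

    type kind =
      | REG                        (* regular file *)
      | LNK of string              (* symbolic link *)
      | LINK of string             (* hard link *)
      | CHR of int * int           (* character device (major, minor) *)
      | BLK of int * int           (* block device (major, minor) *)
      | DIR                        (* directory *)
      | FIFO                       (* named pipe *)
      | CONT                       (* contiguous file, treated as a regular file *)

    type header =
      { name : string;
        perm : int;
        uid : int;
        gid : int;
        size : int;
        mtime : int;
        kind : kind;
        user : string;
        group : string }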
Reading a header is not very interesting, but it cannot be ignored.
An archive ends either at the end of file where a new record would
start or on a complete, but empty, block. To read a header we thus try
to read a block which must be either empty or complete. For that we
use the really_read function defined earlier. The end of file
should not be reached when we try to read a block.
To perform an operation in an archive, we need to read the records sequentially until we find the target of the operation. Usually we just need to read the header of each record without its contents but sometimes we also need to get back to a previous one to read its contents. As such we keep, for each record, its header and its location in the archive:
We define a general iterator that reads and accumulates the records
of an archive (without their contents). To remain general, the
processing function f is abstracted. This allows us to use the
same iterator function to display records, destroy them, etc.
The auxiliary function fold_aux starts from a position
offset with a
accu. It moves to
offset where a record
should start, reads a header, constructs the record
r and starts
again at the end of the record with the new (less partial) result
f r accu. It stops when there's no header: the end of the archive is reached.
We just display the name of records without keeping them:
readtar a f must look for the file
f in the
archive and, if it is a regular file, display its contents. If
f is a hard link on
g in the archive, we follow the link and
g since even though
g are represented
differently in the archive they represent the same file. The fact that
f is a link on the other or vice versa depends only on
the order in which the files were traversed when the archive was
created. For now we do not follow symbolic links.
Hard link resolution is done by the following mutually recursive functions:
The function find_regular finds the regular file corresponding to a record. If
r is a regular file itself, it is returned. If
r is a hard link the function looks for the regular
file in the archive’s previous records stored in
the list, with the auxiliary function
find_file. In all other cases, the function aborts.
Once the record is found we just need to display its contents. After
positioning the descriptor at the start of the record’s contents this
operation is very similar to the copy loop of file_copy.
We now just need to combine these functions correctly.
We read the records in the archive (but not their contents) until we
find the record with the target name. We then call the function
find_regular to find the record that actually contains the file.
This second, backward, search must succeed if the archive is
well-formed. The first search may however fail if the target name is
not in the archive. In case of failure, the program takes care to
distinguish between these two cases.
Here is the main function which implements the readtar command.
Extend the command
readtar so that it follows symbolic links in
the sense that if the link points to a file of the archive that file’s
contents should be extracted.
Write a command
untar such that
untar a extracts and creates
all the files in the archive
a (except special files)
restoring if possible the information about the files
(owners, permissions) as found in the archive.
The file hierarchy should be reconstructed in the current working
directory of the
untar command. If the archive tries to create
files outside a sub-directory of the current working directory this
should be detected and prohibited. Nonexistent directories not explicitly
mentioned in the archive should be created with the user's default permissions.
Write a program
tar such that
tar -cvf a f1 f2 ...
constructs the archive
a containing the list of files
f1, f2, etc. and their sub-directories.
writesystem calls to make the complete transfer — see the discussion in section 5.7. But this limit is bigger than the size of system caches and it is not observable. | http://ocamlunix.forge.ocamlcore.org/files.html | 13 |
22 | These questions will test your logical thinking skills. You’ll be given an argument in the form of a short paragraph. You’ll then need to analyze conclusions, assumptions, and/or evidence given by the author. You could also be asked to identify additional data which could pinpoint strengths or weaknesses in the argument. Often, many of the answer choices may seem to be “shades of correct;” you should select the best option.
These questions are designed to test the reasoning skills involved in making arguments, evaluating arguments, and formulating or analyzing a plan of action. Questions are based on materials from a variety of sources, so familiarity with the specific subject matter is not necessary.
This section measures the test-taker’s ability to reason effectively in the following areas:
- Argument construction. Questions of this type may ask the test-taker to recognize the basic structure of an argument. You are also asked to grasp conclusions, underlying assumptions, well-supported explanatory hypotheses, and/or parallels between structurally similar arguments.
- Argument evaluation. Questions of this type may require that the test-taker analyze a given argument and recognize factors that would strengthen or weaken an argument. You are asked to identify reasoning errors and supplementary information that could either bolster or subvert the author’s contention.
- Formulating and evaluating a plan of action. Questions of this type may ask the test-taker to recognize the relative appropriateness, effectiveness, or efficiency of different plans of action. You are asked to recognize factors that would strengthen or weaken a proposed plan of action or its underlying assumptions.
The oldest pieces of Tlingit art found in the Pacific Northwest of North America date from about 2,500 years ago. However, a 4,000-year-old longboat was recently found in this region. This longboat resembles the Tlingit’s distinctive fishing vessels. Moreover, this longboat has features that have never been observed in the vessels of any other culture known to have inhabited North America. Therefore, the Tlingit almost certainly began to reside in the Pacific Northwest at least 4,000 years ago.
Which of the following, if true, most seriously weakens the argument?
Answer & Explanation
The author tells us two things about the Tlingit civilization: 1) the earliest piece of Tlingit art found in this region is 2,500 years old, and 2) a 4,000-year-old longboat that looks like a Tlingit vessel has recently been found in this region. Since no other known North American culture has used such boats, the author concludes that the Tlingit must have resided in the Pacific Northwest 4,000 years ago.
We are asked to weaken the author’s argument. The author’s argument assumes that the 4,000-year-old longboat can be attributed to Tlingit living in the Pacific Northwest. The answer should provide a reason to doubt this assumption.
Choice A states that when new cultures replace or absorb previous cultures in a region, they sometimes absorb the previous culture’s style of boat building. If this is true, then the Tlingit could have acquired their distinctive longboats from an older Pacific Northwest culture that has since disappeared. This previous culture could have created the 4,000-year-old longboat, undermining the author’s assumption that the Tlingit did so. This would weaken the author’s argument.
Choice B informs us that Tlingit oral history sheds no light on the date of the tribe’s first appearance in the Pacific Northwest. B therefore offers us a factor that can’t be used to judge the author’s argument, and is therefore incorrect.
Choice C mentions that the Tlingit’s longboats don’t generally contain Tlingit artwork, while Choice D notes a general connection between Tlingit fishing and Tlingit artwork. However, the author’s argument hinges on the attribution of one particular 4,000-year-old longboat to the Tlingit, to which choices C and D are not relevant.
Choice E confirms that fishing took place in the Pacific Northwest 4,000 years ago, but we already know that a 4,000-year-old longboat has been found here. Therefore, this offers us no new information pertaining to the author’s argument.
Choice A is correct.
The removal of hillsides and mountaintops, necessary for mining companies to extract coal quickly from deeply-buried seams, destroys forests. Experts therefore recommend that coal be extracted using time-consuming deep bore techniques.
Because public opinion opposes coal mining, some states now allow mining companies to extract coal from any particular site for only a short period of time.
The statements above, if true, best support which of the following conclusions?
Answer & Explanation
The passage states that mining companies can extract coal quickly only by removing hillsides and mountaintops. This destroys forests. Mining companies can also extract coal more slowly using deep bore techniques, a method which experts recommend.
Furthermore, public opinion is against coal mining, and as a result of public opinion, states allow mining companies to extract coal from a given site only for a short period of time.
The question asks for the conclusion that is best supported by these statements. The answer should be able to be inferred from the information in the passage.
Choice A states that mining companies will no longer be able to extract coal by removing hillsides and mountaintops. But despite experts’ recommendations, nothing prevents coal companies from using this technique.
Choice B makes the extreme claim that coal mining must be stopped entirely if forests are to be preserved. However, only one type of coal mining is linked in the passage to the destruction of forests; the type of mining that experts suggest is not.
Choice C predicts the development of new mining methods that allow quick removal of coal without the negative effects that contribute to the destruction of forests. Though the passage supports the conclusion that such advances would be desirable, it provides no evidence to conclude that they will definitely be developed.
Choice D states that public opinion works against preservation efforts. Public opinion results in states granting only short-term leases to mining companies. A mining company only able to operate in one place for a short period of time would have additional incentive to retrieve coal using the quick method of hillside removal rather than the time-consuming deep bore process. This, in turn, would lead to the destruction of forests. Choice D is correct.
Choice E states that granting mining companies short-term permits is unlikely to be a successful response to public opinion. However, the passage doesn’t state precisely what about mining bothers the public, so it’s not possible to conclude whether or not the state’s response will address public concerns effectively.
A major city uses income from tax revenues to fund incentives for high-end retailers from out of town to open stores in its new downtown shopping district. Although city taxes on such stores will generate tax revenues greater than the cost of the incentives, this practice is unwise. Locally based high-end retailers would open stores in the new shopping district without requiring the city to spend tax revenue on incentives.
Which of the following, if true, most strongly supports the city’s policy of offering cash incentives to out-of-town retailers?
Answer & Explanation
This passage presents two arguments about an issue.
The first is that of a city: this city concludes that it should use tax money to fund incentives to persuade retailers from out of town to open locations in its shopping district, based on the evidence that tax revenue generated by these retailers will be greater than the cost of the incentives.
The second is that of the author, who concludes that the city should not pursue this policy, based on the evidence that retailers from inside the city would move into its business district for free.
We are asked to support the city’s argument, which will likely involve weakening the argument of the author against the city’s policy. Our answer should give us additional evidence for why the city will be better off with out-of-town retailers in its shopping district, even though they must be paid to move there.
Choice A supports the author’s argument and weakens the city’s argument. If the city’s retailers are very similar to out-of-town retailers, the incentives would be a waste of money. Choice B cites a decline in the city’s tax revenues, but doesn’t give us any information about whether the city’s policy or the author’s plan will more effectively address this trend.
Choice C, however, does provide us with such information. If locally based businesses are exempt from city taxes, this means that the city won’t gain any tax revenues at all from these businesses. Therefore, despite the cost of the incentives, the city will gain a greater net tax revenue from out-of-town retailers’ stores than from locally-based retailers’ stores. This weakens the author’s argument and thus strengthens the city’s argument.
The status of cash incentives as a relatively recent phenomenon (choice D) doesn’t tell us anything about their effectiveness for the city, and there’s no necessary connection between the number of stores that a retailer operates elsewhere (choice E) and the ability of one of its stores to generate tax revenues. Therefore, choices D and E are irrelevant.
Choice C is correct. | http://www.knewton.com/gmat/free/sample-questions/critical-reasoning/ | 13 |
15 |
DNA replication is the process of copying a double-stranded deoxyribonucleic acid (DNA) molecule, a process essential in all known life forms. The general mechanisms of DNA replication are different in prokaryotic and eukaryotic organisms.
As each DNA strand holds the same genetic information, both strands can serve as templates for the reproduction of the opposite strand. The template strand is preserved in its entirety and the new strand is assembled from nucleotides. This process is called semiconservative replication. The resulting double-stranded DNA molecules are identical; proofreading and error-checking mechanisms exist to ensure extremely high fidelity.
In a cell, DNA replication must happen before cell division. Prokaryotes replicate their DNA throughout the interval between cell divisions. In eukaryotes, timings are highly regulated and this occurs during the S phase of the cell cycle, preceding mitosis or meiosis I.
A DNA strand is a long polymer built from nucleotides; two complementary DNA strands form a double helix, each strand possessing a 5' phosphate end and a 3' hydroxyl end. The numbers followed by the prime indicate the position on the deoxyribose sugar backbone to which the phosphate or hydroxyl group is attached (numbers without primes are reserved for the bases). The two strands of the DNA backbone run anti-parallel to each other: one strand is built in the 5' → 3' direction while the other runs anti-parallel to it, with its information stored in the 3' → 5' direction. Each nucleotide consists of a phosphate and a deoxyribose sugar - which together form the backbone of the DNA double helix - plus a base. The bonding angles of the backbone ensure that DNA tends to twist along the length of the molecule, giving rise to a double helix shape instead of a straight ladder. Base pairs form the steps of the helix ladder while the sugars and phosphate groups form the handrail. Each of the four bases has a partner to which it makes the strongest hydrogen bonds. When a nucleotide base forms hydrogen bonds with its complementary base on the other strand, they form a base pair: adenine pairs with thymine and cytosine pairs with guanine. These pairings can be expressed as C•G and A•T, or C:::G and A::T, where the number of colons indicates the number of hydrogen bonds between each base pair. For example, a 10-base strand running in the 5' → 3' direction that has adenine as its 3rd base will pair with thymine as the 8th base of the complementary 10-base strand running in the opposite direction.
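As an illustrative sketch (not taken from any reference; it assumes strands are represented as plain strings written 5' → 3'), this pairing rule and the anti-parallel orientation can be expressed in a few lines of Python:

complement = {"A": "T", "T": "A", "C": "G", "G": "C"}

def reverse_complement(strand):
    # Complement each base, then reverse, so the partner strand also reads 5' -> 3'.
    return "".join(complement[base] for base in reversed(strand))

strand = "GGATGCTAAG"                   # adenine (A) is the 3rd base
partner = reverse_complement(strand)    # "CTTAGCATCC"
assert partner[8 - 1] == "T"            # thymine (T) is the 8th base of the partner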
The replication fork
The replication fork is a structure which forms when DNA is being replicated. It is created through the action of helicase, which breaks the hydrogen bonds holding the two DNA strands together. The resulting structure has two branching "prongs", each one made up of a single strand of DNA.
Lagging strand synthesis
In DNA replication, the lagging strand is the DNA strand at the replication fork opposite to the leading strand. It is also oriented in the opposite direction when compared to the leading strand, with the 5' end near the replication fork instead of the 3' end as is the case with the leading strand. When the enzyme helicase unwinds DNA, two single-stranded regions of DNA (the "replication fork") form. DNA polymerase cannot build a strand in the 3' → 5' direction. This poses no problem for the leading strand, which can be synthesized continuously in a processive manner, but it creates a problem for the lagging strand, whose continuous synthesis would have to run 3' → 5'. Thus, the lagging strand is synthesized in short segments known as Okazaki fragments. On the lagging strand, primase builds an RNA primer in short bursts. DNA polymerase is then able to use the free 3' hydroxyl group on the RNA primer to synthesize DNA in the 5' → 3' direction. The RNA fragments are then removed (different mechanisms are used in eukaryotes and prokaryotes) and new deoxyribonucleotides are added to fill the gaps where the RNA was present. DNA ligase is then able to join the deoxyribonucleotides together, completing the synthesis of the lagging strand.
Leading strand synthesis
The leading strand is defined as the DNA strand that is read in the 3' → 5' direction but synthesized in the 5'→ 3' direction, in a continuous manner. On this strand, DNA polymerase III is able to synthesize DNA using the free 3'-OH group donated by a single RNA primer (multiple RNA primers are not used) and continuous synthesis occurs in the direction in which the replication fork is moving.
Dynamics at the replication fork
Sliding clamps in all domains of life share a similar structure and are able to interact with the various processive and non-processive DNA polymerases found in cells. In addition, the sliding clamp serves as a processivity factor. The C-terminal end of the clamp forms loops which are able to interact with other proteins involved in DNA replication (such as DNA polymerase and the clamp loader). The inner face of the clamp allows DNA to be threaded through it. The sliding clamp forms no specific interactions with DNA. There is a large, roughly 35 Å hole in the middle of the clamp. This allows DNA to fit through it, with water taking up the rest of the space, so the clamp can slide along the DNA. Once the polymerase reaches the end of the template or detects double-stranded DNA (see below), the sliding clamp undergoes a conformational change which releases the DNA polymerase.
The clamp loader, a multisubunit protein, is able to bind to the sliding clamp and DNA polymerase. When ATP is hydrolyzed, the clamp loader loses affinity for the sliding clamp, allowing DNA polymerase to bind to it. Furthermore, the sliding clamp can only remain bound to a polymerase as long as single-stranded DNA is being synthesized. Once the single-stranded DNA runs out, the polymerase is able to bind to a subunit of the clamp loader and move to a new position on the lagging strand. On the leading strand, DNA polymerase III associates with the clamp loader and is bound to the sliding clamp.
Recent evidence suggests that the enzymes and proteins involved in DNA replication remain stationary at the replication forks while DNA is looped out to maintain bidirectionality observed in replication. This is a result of an interaction between DNA polymerase, the sliding clamp, and the clamp loader.
DNA replication differs somewhat between eukaryotic and prokaryotic cells. Much of our knowledge of the process of DNA replication was derived from the study of E. coli, while yeast has been used as a model organism for understanding eukaryotic DNA replication.
Mechanism of replication
Once priming of DNA is complete, DNA polymerase is loaded onto the DNA and replication begins. The catalytic mechanism of DNA polymerase involves the use of two metal ions in the active site and a region in the active site that can discriminate between deoxynucleotides and ribonucleotides. The metal ions are generally divalent cations that help the 3'-OH initiate a nucleophilic attack onto the alpha-phosphate of the deoxyribonucleotide and orient and stabilize the negatively-charged triphosphate on the deoxyribonucleotide. Nucleophilic attack by the 3'-OH on the alpha phosphate releases pyrophosphate, which is subsequently hydrolyzed by inorganic pyrophosphatase into two phosphates. This hydrolysis drives DNA synthesis to completion.
Furthermore, DNA polymerase must be able to distinguish between correctly paired bases and incorrectly paired bases. This is accomplished by distinguishing Watson-Crick base pairs through the use of an active site pocket that is complementary in shape to the structure of correctly paired nucleotides. This pocket has a tyrosine residue that is able to form van der Waals interactions with the correctly paired nucleotide. In addition, double stranded DNA in the active site has a wider and shallower minor groove that permits the formation of hydrogen bonds with the third nitrogen of purine bases and the second oxygen of pyrimidine bases. Finally, the active site makes extensive hydrogen bonds with the DNA backbone. These interactions result in the DNA polymerase III closing around a correctly paired base. If a base is inserted and incorrectly paired, these interactions could not occur due to disruptions in hydrogen bonding and van der Waals interactions. The mechanism of replication is similar in eukaryotes and prokaryotes.
DNA is read in the 3' → 5' direction, relative to the parent strand, therefore, nucleotides are synthesized (or attached to the template strand) in the 5' → 3' direction, relative to the daughter strand. However, one of the parent strands of DNA is 3' → 5' and the other is 5' → 3'. To solve this, replication must proceed in opposite directions. The leading strand runs towards the replication fork and is thus synthesized in a continuous fashion, only requiring one primer. On the other hand, the lagging strand runs in the opposite direction, heading away from the replication fork, and is synthesized in a series of short fragments known as Okazaki fragments, consequently requiring many primers. The RNA primers of Okazaki fragments are subsequently degraded by RNase H and DNA Polymerase I (exonuclease), and the gap (or nicks) are filled with deoxyribonucleotides and sealed by the enzyme ligase.
DNA replication in bacteria (E.coli)
Initiation of replication and the bacterial origin
DNA replication in E. coli is bi-directional and originates at a single origin of replication (OriC). The initiation of replication is mediated by DnaA, a protein that binds to a region of the origin known as the DnaA box. In E. coli, there are 5 DnaA boxes, each of which contains a highly conserved 9-base pair consensus sequence 5' - TTATCCACA - 3'. Binding of DnaA to this region causes it to become negatively supercoiled. Following this, a region of OriC upstream of the DnaA boxes (known as DnaB boxes) melts. There are three of these regions. Each is 13 base pairs long and rich in A-T base pairs. This facilitates melting because less energy is required to break the two hydrogen bonds that form between A and T nucleotides. This region has the consensus sequence 5' - GATCTNTTNTTTT - 3'. Melting of the DnaB boxes requires ATP (which is hydrolyzed by DnaA). Following melting, DnaA recruits a hexameric helicase (six DnaB proteins) to opposite ends of the melted DNA. This is where the replication fork will form. Recruitment of helicase requires six DnaC proteins, each of which is attached to one subunit of helicase. Once this complex is formed, an additional five DnaA proteins bind to the original five DnaA proteins to form five DnaA dimers. DnaC is then released, and the prepriming complex is complete. In order for DNA replication to continue, single-strand binding proteins (SSBs) are needed to prevent the single strands of DNA from forming any secondary structures and to prevent them from reannealing, and DNA gyrase is needed to relieve the stress (by creating negative supercoils) created by the action of DnaB helicase. The unwinding of DNA by DnaB helicase allows primase (DnaG), an RNA polymerase, to prime each DNA template so that DNA synthesis can begin.
Termination of replication
Termination of DNA replication in E. coli is completed through the use of termination sequences and the Tus protein. These sequences allow the two replication forks to pass through in only one direction, but not the other. In order to slow down and stop the movement of the replication fork in the termination region of the E. coli chromosome, the Tus protein is required. This protein binds to the termination sites, and prevents DnaB from displacing DNA strands. However, these sequences are not required for termination of replication.
Regulation of replication
Regulation of DNA replication is achieved through several mechanisms. Mechanisms of regulation involve the ratio of ATP to ADP, the ratio of DnaA protein to DnaA boxes and the hemimethylation and sequestering of OriC. The ratio of ATP to ADP indicates that the cell has reached a specific size and is ready to divide. This "signal" occurs because in a rich medium, the cell will grow quickly and will have a lot of excess ATP. Furthermore, DnaA binds equally well to ATP or ADP, but only the DnaA-ATP complex is able to initiate replication. Thus, in a fast growing cell, there will be more DnaA-ATP than DnaA-ADP.
Another mode of regulation involves the levels of DnaA in the cell. 5 DnaA-DnaA dimers are needed to initiate replication. Thus, the ratio of DnaA to the number of DnaA boxes in the cell is important. After DNA replication is complete, this number is halved and replication cannot occur until the levels of DnaA protein increase.
Finally, upon completion of DNA replication, DNA is sequestered to a membrane-binding protein called SeqA. This protein binds to hemimethylated GATC DNA sequences. This 4-base pair sequence occurs 11 times in OriC. Only the parent strand is methylated upon completion of DNA synthesis. DAM methyltransferase methylates the adenine residues in the newly synthesized strand of DNA only if it is not bound to SeqA. The importance of this form of regulation is twofold: 1) OriC becomes inaccessible to DnaA and 2) DnaA binds better to fully methylated DNA than hemimethylated DNA.
Rolling circle replication
Rolling circle replication is initiated by an initiator protein encoded by the plasmid or bacteriophage DNA. This protein is able to nick one strand of the double-stranded, circular DNA molecule at a site called the double-strand origin (DSO) and remains bound to the 5'-PO4 end of the nicked strand. The free 3'-OH end is released and can serve as a primer for DNA synthesis by DNA polymerase III. Using the unnicked strand as a template, replication proceeds around the circular DNA molecule, displacing the nicked strand as single-stranded DNA.
Continued DNA synthesis can produce multiple single-stranded linear copies of the original DNA in a continuous head-to-tail series. These linear copies can be converted to double-stranded circular molecules through the following process: First, the initiator protein makes another nick to terminate synthesis of the first (leading) strand. RNA polymerase and DNA polymerase III then replicate the single-stranded origin (SSO) DNA to make another double-stranded circle. DNA polymerase I removes the primer, replacing it with DNA, and DNA ligase joins the ends to make another molecule of double-stranded circular DNA.
A striking feature of rolling circle replication is the uncoupling of the replication of the two strands of the DNA molecule. In contrast to common modes of DNA replication where both the parental DNA strands are replicated simultaneously, in rolling circle replication one strand is replicated first (which protrudes after being displaced, giving the characteristic appearance) and the second strand is replicated after completion of the first one.
Rolling circle replication has found wide uses in academic research and biotechnology, and has been successfully used for amplification of DNA from very small amounts of starting material.
Plasmid replication: Origin and regulation
The regulation of plasmids differs considerably from the regulation of chromosomal replication. However, the machinery involved in the replication of plasmids is similar to that of chromosomal replication. The plasmid origin is commonly termed OriV, and at this site DNA replication is initiated. The ori region of plasmids, unlike that found on the host chromosome, contains the genes required for its replication. In addition, the ori region determines the host range. Plasmids carrying the ColE1 origin have a narrow host range and are restricted to the relatives of E. coli. Plasmids utilizing the RK2 ori and ones that replicate using rolling circle replication have a broad host range and are compatible with gram-positive and gram-negative bacteria. Another important characteristic of the ori region is the regulation of plasmid copy number. Generally, high copy number plasmids have mechanisms that inhibit the initiation of replication. Regulation of plasmids based on the ColE1 origin, a high copy number origin, requires an antisense RNA. A gene close to the origin, RNAII, is transcribed, and the 3'-OH of the transcript primes the origin only if it is cleaved by RNase H. Transcription of RNAI, the antisense RNA, inhibits RNAII from priming the DNA because it prevents the formation of the RNA-DNA hybrid recognized by RNase H.
Eukaryotic DNA replication
Although the mechanisms of DNA synthesis in eukaryotes and prokaryotes are similar, DNA replication in eukaryotes is much more complicated. Though DNA synthesis in prokaryotes such as E. coli is regulated, DNA replication is initiated before the end of the cell cycle. Eukaryotic cells can only initiate DNA replication at a specific point in the cell cycle known as the S phase.
DNA replication in eukaryotes occurs only in the S phase of the cell cycle. However, pre-initiation occurs in G1. Due to the sheer size of eukaryotic chromosomes, they contain multiple origins of replication. Some origins are well characterized, such as the autonomously replicating sequences (ARS) of yeast, while other eukaryotic origins, particularly those in metazoa, can be found in spans of thousands of base pairs. However, the assembly and initiation of replication is similar in both protozoa and metazoa. Detailed information on yeast ARS elements is available at http://www.oridb.org/index.php
Initiation of replication
The first step in the eukaryotic DNA replication is the formation of the pre-initiation replication complex (the pre-RC). The formation of this complex occurs in two stages. The first stage requires that there is no cyclin-dependent kinase (CDK) activity. This can only occur in early G1. The formation of the pre-RC is known as licensing, but a licensed pre-RC cannot initiate replication. Initiation of replication can only occur during the S-phase. Thus, the separation of licensing and activation ensures that the origin can only fire once per cell cycle.
DNA replication in eukaryotes is not very well characterized. However, researchers believe that it begins with the binding of the origin recognition complex (ORC) to the origin. This complex is a hexamer of related proteins and remains bound to the origin, even after DNA replication occurs. Furthermore, ORC is the functional analogue of DnaA. Following the binding of ORC to the origin, Cdc6/Cdc18 and Cdt1 coordinate the loading of the minichromosome maintenance functions (MCM) complex to the origin by first binding to ORC and then binding to the MCM complex. The MCM complex is thought to be the major DNA helicase in eukaryotic organisms, and is a hexamer (mcm2-7). Once binding of MCM occurs, a fully licensed pre-RC exists.
Activation of the complex occurs in S-phase and requires Cdk2-Cyclin E and Ddk. The activation process begins with the addition of Mcm10 to the pre-RC, which displaces Cdt1. Following this, Ddk phosphorylates Mcm3-7, which activates the helicase. It is believed that ORC and Cdc6/18 are phosphorylated by Cdk2-Cyclin E. Ddk and the Cdk complex then recruits another protein called Cdc45, which then recruits all of the DNA replication proteins to the replication fork. At this stage the origin fires and DNA synthesis begins.
Regulation of replication
Activation of a new round of replication is prevented through the actions of the cyclin-dependent kinases and a protein known as geminin. Geminin binds to Cdt1 and sequesters it. It is a periodic protein that first appears in S-phase and is degraded in late M-phase, possibly through the action of the anaphase promoting complex (APC). In addition, phosphorylation of Cdc6/18 prevents it from binding to the ORC (thus inhibiting loading of the MCM complex), while the role of ORC phosphorylation remains unclear. Cells in the G0 stage of the cell cycle are prevented from initiating a round of replication because the MCM proteins are not expressed. Researchers believe that termination of DNA replication in eukaryotes occurs when two replication forks encounter each other.
Numerous polymerases can replicate DNA in eukaryotic cells. Currently, six families of polymerases (A, B, C, D, X, Y) have been discovered. At least four different types of DNA polymerases are involved in the replication of DNA in animal cells (POLA, POLG, POLD1 and POLE). POLA functions by extending the primer in the 5' -> 3' direction. However, it lacks the ability to proofread DNA. POLD1 has a proofreading ability and is able to replicate the entire length of a template only when associated with PCNA. POLE is able to replicate the entire length of a template in the absence of PCNA and is able to proofread DNA, while POLG replicates mitochondrial DNA via the D-Loop mechanism of DNA replication. All primers are removed by RNaseH1 and Flap Endonuclease I. The general mechanisms of DNA replication on the leading and lagging strand, however, are the same as those found in prokaryotic cells.
Eukaryotic DNA replication takes place at discrete sites in the nucleus. These replication foci contain the replication machinery (the proteins involved in DNA replication).
A unique problem that occurs during the replication of linear chromosomes is chromosome shortening. Chromosome shortening occurs when the primer at the 5' end of the lagging strand is degraded. Because DNA polymerase cannot add new nucleotides to the 5' end of DNA (there is no place for a new primer), the ends would shorten after each round of replication. However, in most replicating cells a small amount of telomerase is present, and this enzyme extends the ends of the chromosomes so that this problem does not occur. This extension occurs when the telomerase enzyme binds to a section of DNA on the 3' end and extends it using the normal replication machinery. This then allows for a primer to bind so that the complementary strand can be extended by normal lagging strand synthesis. Finally, telomeres must be capped by a protein to prevent chromosomal instability.
Replication of mitochondrial DNA and chloroplast DNA
D-loop replication is a process by which chloroplasts and mitochondria replicate their genetic material. An important component of understanding D-loop replication is that chloroplasts and mitochondria have a single circular chromosome like bacteria instead of the linear chromosomes found in eukaryotes. Replication begins at the leading strand origin. The leading strand is replicated in one direction and after about 2/3 of the chromosome's leading strand has been replicated, the lagging strand origin is exposed. Replication of the lagging strand is 1/3 complete when the replication of the leading strand is finished. The resulting structure looks like the letter D, and this occurs because the synthesis of the leading strand displaces the lagging strand.
The D-loop region is important for phylogeographic studies. Because the region does not code for any genes, it is free to vary with only a few selective limitations on size and heavy/light strand factors. The mutation rate is among the fastest of anywhere in either the nuclear or mitochondrial genomes in animals. Mutations in the D-loop can effectively track recent and rapid evolutionary changes such as within species and among very closely related species.
DNA replication in archaea
Understanding DNA replication in the archaea is just beginning, and it is the goal of this section to provide a basic understanding of how DNA replication occurs in these unique prokaryotes. In addition, this section aims to provide a comparison between the three domains.
Origin of replication
The origins of archaea are AT rich, and generally have one or more AT stretches. In addition, long inverted repeats flank both ends of the origin, and are thought to be important in the initiation process and may serve a function similar to the DnaA boxes in the eubacteria. The genes that code for Cdc6/Orc1 are also located near the origin region, and this arrangement may allow these proteins to associate with the origin as soon as they are translated.
Initiation of replication begins with the binding of Cdc6/Orc1 to the origin in an ATP independent manner. This complex is constitutively expressed and most likely forms the origin binding proteins (OBP). Due to their similarity with proteins involved in eukaryotic initiation, Cdc6/Orc1 may be involved in helicase loading in archaea. However, other evidence suggests that this complex may function as an initiator and create a sufficiently large replication bubble to allow the helicase (Mcm) to load without the presence of a loader. Once loading of this complex is complete, however, the DNA melts, and helicase can be loaded.
In archaea, a hexameric protein known as the Mcm complex may function as the primary helicase. This protein is homologous to the eukaryotic Mcm complex. In archaea, there is no cdt1 homologue, and the helicase may be able to self-assemble at an archaeal origin without the need for a helicase loader. These proteins possess 3'->5' helicase capability.
Single stranded binding protein (SSB) prevents exposed single stranded DNA from forming any secondary structures or reannealing. This complex is able to recruit primase, DNA polymerase and other replication machinery. The mechanisms of this process are similar to those in eukaryotes.
Similarities to eukaryotic and eubacterial replication
- ORC is homologous to Cdc6/Orc1 in archaea and may represent the ancestral state of the eukaryotic pre-RC.
- A homologous Mcm protein exists between eukarya and archaea
- The structure of Cdc6/Orc1 resembles the tertiary structure of DnaA in eubacteria
- Both eukaryotic and archaeal helicases possess 3'->5' helicase capability
- Archaeal SSB is similar to RPA
- ↑ Berg J, Tymoczko JL, Stryer L (2006). Biochemistry, 6th ed., San Francisco: W. H. Freeman. ISBN 0716787245.
- ↑ Lehninger, A., Nelson, D. L., and Cox, M. M. "24: DNA Metabolism", Principles of Biochemistry, Second edition. New York: Worth Publishers, 818-829. ISBN 0-87901-500-4.
- ↑ DnaA protein binding to individual DnaA boxes in the Escherichia coli replication origin, oriC. C Weigel, A Schmidt, B Rückert, R Lurz, and W Messer
- ↑ DnaA protein/DNA interaction. Modulation of the recognition sequence
- ↑ Effects of Escherichia coli SSB protein on the single-stranded DNA-dependent ATPase activity of Escherichia coli RecA protein. Evidence that SSB protein facilitates the binding of RecA protein to regions of secondary structure within single-stranded DNA.
- ↑ http://www.oridb.org/index.php
- ↑ http://www.mdc-berlin.de/cardosolab/publications/Leonhardt_2000b.pdf
- Voet and Voet. Biochemistry, Third Edition (2004). ISBN 0-471-19350-X. Wiley International Edition.
- Watson, Baker, Bell, Gann, Levine, Losick. Molecular Biology of the Gene, Fifth Edition (2003). ISBN 0-8053-4635-X. Pearson/Benjamin Cummings Publishing.
- Weem, Minka Peeters. International Baccalaureate, Biology, Second Edition (2001). IBID Press, Box 9, Camberwell, 3124, Australia.
- Russell, P. J. 2002. iGenetics. Benjamin Cummings, San Francisco.
- Snyder and Champness. Molecular Genetics of Bacteria, Second Edition (2003). ISBN 1-55581-204-X. ASM Press.
- Bell and Dutta. 2002. Annu. Rev. Biochem 71:333–74.
- Barry, E. R., & Bell, S. D. (2006). DNA replication in the archaea. Microbiology and molecular biology reviews : MMBR, 70(4), 876-887.
- Kelman, L. M., & Kelman, Z. (2003). Archaea: An archetype for replication initiation studies? Molecular microbiology, 48(3), 605-615.
- Heinrich Leonhardt,* Hans-Peter Rahn,* Peter Weinzierl, Anje Sporbert,* Thomas Cremer, Daniele Zink, and M. Cristina Cardoso, "Dynamics of DNA Replication Factories in Living Cells", The Journal of Cell Biology, Volume 149, Number 2, April 17, 2000 271–279
- DNA Workshop
- WEHI-TV - DNA Replication video Detailed DNA replication animation from different angles with description below.
- Breakfast of Champions Does Replication Creative primer on the process from the Science Creative Quarterly
- Basic Polymerase Chain Reaction Protocol
- Animated Biology
- DNA makes DNA (Flash Animation)
- DNA replication info page by George Kakaris, Biologist MSc in Applied Genetics and Biotechnology
- Reference website on eukaryotic DNA replication
- Molecular visualization of DNA Replication
| http://www.wikidoc.org/index.php/DNA_replication | 13 |
17 | Ribonucleic acid or RNA is a polymer or chain of nucleotide units, each comprising a nitrogenous base (adenine, cytosine, guanine, or uracil), a five-carbon sugar (ribose), and a phosphate group. The sugar and phosphate groups form the polymer's backbone, while the nitrogenous bases extending from the backbone provide RNA's distinctive properties.
In living cells, RNA in different configurations fulfills several important roles in the process of translating genetic information from deoxyribonucleic acid (DNA) into proteins. One type of RNA (messenger RNA, or mRNA) acts as a messenger between DNA and the protein synthesis complexes known as ribosomes; a second type (ribosomal RNA, or rRNA) forms vital portions of the structure of ribosomes; a third type (transfer RNA, or tRNA) is an essential guide that delivers the appropriate protein building blocks, amino acids, to the ribosome. Other types exist as well: microRNAs (miRNAs) play a role in regulating gene expression, while small nuclear RNAs (snRNAs) take part in splicing, helping to ensure that mRNA does not retain sequences that would lead to a faulty protein. RNA also serves as the genetic blueprint for certain viruses, and some RNA molecules (called ribozymes) are involved in the catalysis of biochemical reactions.
RNA is very similar to DNA, but differs in a few important structural details. RNA is usually single stranded, while DNA naturally seeks its stable form as a double stranded molecule. RNA nucleotides contain ribose while DNA nucleotides contain the closely related sugar deoxyribose. Furthermore, RNA uses the nucleotide uracil in its composition, instead of the thymine that is present in DNA. RNA is transcribed from DNA by enzymes called RNA polymerases and is generally further processed by other enzymes, some of them guided by non-coding RNAs.
Single-stranded RNA is similar to the protein polymer in its natural propensity to fold back and double up with itself in complex ways assuming a variety of biologically useful configurations.
Chemical and stereochemical structure
A nucleotide is a chemical compound comprising three components: a nitrogen-containing base, a pentose (five-carbon) sugar, and one or more phosphate groups. The nitrogen-containing base of a nucleotide (also called the nucleobase) is typically a derivative of either purine or pyrimidine. The most common nucleotide bases are the purines adenine and guanine and the pyrimidines cytosine and thymine (or uracil in RNA).
Nucleic acids are polymers of repeating units (called monomers). Specifically, they often comprise long chains of nucleotide monomers connected by covalent chemical bonds. RNA molecules may comprise as few as 75 nucleotides or more than 5,000 nucleotides, while a DNA molecule may comprise more than 1,000,000 nucleotide units.
Ribose is an aldopentose, which means a pentose sugar with an aldehyde functional group in position one. An aldehyde group comprises a carbon atom bonded to a hydrogen atom and double-bonded to an oxygen atom (chemical formula O=CH-). Ribose forms a five-member ring with four carbon atoms and one oxygen. Hydroxyl (-OH) groups are attached to three of the carbons. The fourth carbon in the ring (one of the carbon atoms adjacent to the oxygen) has attached to it the fifth carbon atom and a hydroxyl group.
There are also numerous modified bases and sugars found in RNA that serve many different roles. Pseudouridine (Ψ), in which the linkage between uracil and ribose is changed from a C–N bond to a C–C bond, and ribothymidine (T) are found in various places (most notably in the TΨC loop of tRNA). Another notable modified base is hypoxanthine (a deaminated adenine base whose nucleoside is called inosine). Inosine plays a key role in the Wobble Hypothesis of the genetic code. There are nearly 100 other naturally occurring modified nucleosides, of which pseudouridine and nucleosides with 2'-O-methylribose are by far the most common. The specific roles of many of these modifications in RNA are not fully understood. However, it is notable that in ribosomal RNA, many of the post-transcriptional modifications occur in highly functional regions, such as the peptidyl transferase center and the subunit interface, implying that they are important for normal function.
The most important structural feature of RNA that distinguishes it from DNA is the presence of a hydroxyl group at the 2'-position of the ribose sugar. The presence of this functional group enforces the C3'-endo sugar conformation (as opposed to the C2'-endo conformation of the deoxyribose sugar in DNA) that causes the helix to adopt the A-form geometry rather than the B-form most commonly observed in DNA. This results in a very deep and narrow major groove and a shallow and wide minor groove. A second consequence of the presence of the 2'-hydroxyl group is that in conformationally flexible regions of an RNA molecule (that is, not involved in formation of a double helix), it can chemically attack the adjacent phosphodiester bond to cleave the backbone.
Comparison with DNA
The most common nucleic acids are deoxyribonucleic acid (DNA) and ribonucleic acid (RNA). The main role of DNA is the long-term storage of genetic information. DNA is often compared to a blueprint, since it contains instructions for constructing other components of the cell, such as proteins and RNA molecules. The DNA segments that carry genetic information are called genes, but other DNA sequences have structural purposes or are involved in regulating the expression of genetic information. RNA, also, may serve more than one purpose, but it is most commonly identified as the intermediate between the DNA blueprint and the actual workings of the cell, serving as the template for the synthesis of proteins from the genetic information stored in DNA.
RNA and DNA differ in three main ways.
First, unlike DNA which is double-stranded, RNA is intrinsically a single-stranded molecule in most of its biological roles and has a much shorter chain of nucleotides. (While RNA is usually single-stranded, the RNA molecule also quite commonly forms double-helical regions where a given strand has folded back on itself. Double-stranded RNA is found also in certain viruses.)
Secondly, while DNA contains deoxyribose, RNA contains ribose. There is no hydroxyl group attached to the pentose ring at the 2' position in DNA, whereas RNA has hydroxyl groups at both the 2' and 3' positions. The 2'-hydroxyl group makes RNA less stable than DNA because it leaves RNA more prone to hydrolysis. ("Deoxy" simply indicates that the sugar lacks an oxygen atom present in ribose, the parent compound.)
Thirdly, RNA uses the base uracil in place of the thymine found in DNA; uracil pairs with adenine just as thymine does.
Most biologically active RNAs, including tRNA, rRNA, snRNAs, and other non-coding RNAs (such as the signal recognition particle(SRP) RNAs), contain extensively base paired regions that have folded together to form double stranded helices. Structural analysis of these RNAs reveals that they are highly structured with tremendous variety with collections of short helices packed together into structures much more akin to proteins than to DNA, which is usually limited to long double-stranded helices. Through such a variety of structures, RNAs can achieve chemical catalysis, like enzymes. For instance, determination of the structure of the ribosome—an enzyme that catalyzes peptide bond formation—revealed that its active site is composed entirely of RNA.
Synthesis of RNA is usually catalyzed by an enzyme, RNA polymerase, using DNA as a template. Initiation of synthesis begins with the binding of the enzyme to a promoter sequence in the DNA (usually found "upstream" of a gene). The DNA double helix is unwound by the helicase activity of the enzyme. The enzyme then progresses along the template strand in the 3’ -> 5’ direction, synthesizing a complementary RNA molecule with elongation occurring in the 5’ -> 3’ direction. The DNA sequence also dictates where termination of RNA synthesis will occur (Nudler and Gottesman 2002).
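As a rough illustrative sketch (it ignores promoter recognition, elongation, and termination, and only models the base-pairing rule), transcription of a template strand read 3' → 5' into an mRNA built 5' → 3' can be written in a few lines of Python, with uracil taking the place of thymine:

pairing = {"A": "U", "T": "A", "C": "G", "G": "C"}

def transcribe(template_3_to_5):
    # Each template base specifies one RNA base; U replaces T in the product.
    return "".join(pairing[base] for base in template_3_to_5)

print(transcribe("TACGGT"))   # -> AUGCCA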
There are also a number of RNA-dependent RNA polymerases as well that use RNA as their template for synthesis of a new strand of RNA. For instance, a number of RNA viruses (such as poliovirus) use this type of enzyme to replicate their genetic material (Hansen et al. 1997). Also, it is known that RNA-dependent RNA polymerases are required for the RNA interference pathway in many organisms (Ahlquist 2002).
RNA's great variety of possible structures and chemical properties permits it to perform a much greater diversity of roles in the cell than DNA can. Three principal types of RNA are involved in protein synthesis:
- Messenger RNA (mRNA) serves as the template for the synthesis of a protein. It carries information from DNA to the ribosome.
- Transfer RNA (tRNA) is a small chain of nucleotides that transfers a specific amino acid to a growing polypeptide chain at the ribosomal site of synthesis. It pairs the amino acid to the appropriate three-nucleotide codon on the mRNA molecule.
- Ribosomal RNA (rRNA) molecules are extremely abundant and make up at least 80 percent of the RNA molecules found in a typical eukaryotic cell. In the cytoplasm, usually three or four rRNA molecules combine with many proteins to perform a structural and essential catalytic role, as components of the ribosome.
RNA also may serve as a catalyst for reactions and as a genetic blueprint, rather than DNA, in various viruses. Some RNA, including tRNA and rRNA, is non-coding in that it is not translated into proteins.
Messenger RNA (mRNA)
Messenger RNA is RNA that carries information from DNA to the ribosome sites of protein synthesis in the cell. In eukaryotic cells, once mRNA has been transcribed from DNA, it is "processed" before being exported from the nucleus into the cytoplasm, where it is bound to ribosomes and translated into its corresponding protein form with the help of tRNA. In prokaryotic cells, which do not have nucleus and cytoplasm compartments, mRNA can bind to ribosomes while it is being transcribed from DNA. After a certain amount of time the message degrades into its component nucleotides, usually with the assistance of ribonucleases.
RNA genes (also known as non-coding RNA or small RNA) are genes that encode RNA that is not translated into a protein. The most prominent examples of RNA genes are those coding for transfer RNA (tRNA) and ribosomal RNA (rRNA), both of which are involved in the process of translation. Two other groups of non-coding RNA are microRNAs (miRNA) which regulate the expression of genes through a process called RNA interference (RNAi), and small nuclear RNAs (snRNA), a diverse class that includes for example the RNAs that form spliceosomes that excise introns from pre-mRNA (Berg et al. 2002).
Transfer RNA (tRNA)
Transfer RNA is a small RNA chain of about 74-95 nucleotides that transfers a specific amino acid to a growing polypeptide chain at the ribosomal site of protein synthesis, during translation. It has sites for amino-acid attachment and an anticodon region for codon recognition that binds to a specific sequence on the messenger RNA chain through hydrogen bonding. It is a type of non-coding RNA.
Ribosomal RNA (rRNA)
Ribosomal RNA is the catalytic component of the ribosomes, the protein synthesis factories in the cell. Eukaryotic ribosomes contain four different rRNA molecules: 18S, 5.8S, 28S, and 5S rRNA. Three of the rRNA molecules are synthesized in the nucleolus, and one is synthesized elsewhere. rRNA molecules are extremely abundant and make up at least 80 percent of the RNA molecules found in a typical eukaryotic cell.
Although RNA contains only four bases, in comparison to the twenty-odd amino acids commonly found in proteins, certain RNAs (called ribozymes) are still able to catalyze chemical reactions. These include cutting and ligating other RNA molecules, and also the catalysis of peptide bond formation in the ribosome.
Genetic blueprint in some viruses
Some viruses contain either single-stranded or double-stranded RNA as their source of genetic information. Retroviruses, for example, store their genetic information as RNA, though they replicate in their hosts via a DNA intermediate. Once in the host's cell, the RNA strands undergo reverse transcription to DNA in the cytosol and are integrated into the host's genome. Human immunodeficiency virus (or HIV) is a retrovirus thought to cause acquired immune deficiency syndrome (AIDS), a condition in which the human immune system begins to fail, leading to life-threatening opportunistic infections.
Double-stranded RNA (dsRNA) is RNA with two complementary strands, similar to the DNA found in all cells. dsRNA forms the genetic material of some viruses called dsRNA viruses. In eukaryotes, long RNA such as viral RNA can trigger RNA interference, where short dsRNA molecules called siRNAs (small interfering RNAs) can cause enzymes to break down specific mRNAs or silence the expression of genes. siRNA can also increase the transcription of a gene, a process called RNA activation (Doran 2007). siRNA is often confused with miRNA; siRNAs are double-stranded, whereas miRNAs are single-stranded.
RNA world hypothesis
The RNA world hypothesis proposes that the earliest forms of life relied on RNA both to carry genetic information (like DNA does now) and to catalyze biochemical reactions like an enzyme. According to this hypothesis, descendants of these early lifeforms gradually integrated DNA and proteins into their metabolism.
In the 1980s, scientists discovered that certain RNA molecules (called ribozymes) may function as enzymes, whereas previously only proteins were believed to have catalytic ability. Many natural ribozymes catalyze either their own cleavage or the cleavage of other RNAs, but they have also been found to catalyze the peptidyl transferase activity of the ribosome.
The discovery of ribozymes provides a possible explanation for how early RNA molecules might have first catalyzed their own replication and developed a range of enzymatic activities. Known as the RNA world hypothesis, this explanation posits that RNA evolved before either DNA or proteins from free-floating nucleotides in the early "primordial soup." In their function as enzymes, RNA molecules might have begun to catalyze the synthesis of proteins, which are more versatile than RNA, from amino acid molecules. Next, DNA might have been formed by reverse transcription of RNA, with DNA eventually replacing RNA as the storage form of genetic material. Although there are remaining difficulties with the RNA world hypothesis, it remains as a possible key to understanding the origin and development of the multi-functional nature of nucleic acids, the interconnectedness of life, and its common origins.
RNA secondary structures
The functional form of single-stranded RNA molecules, just like that of proteins, frequently requires a specific tertiary structure. The scaffold for this structure is provided by secondary structural elements, which arise through the formation of hydrogen bonds within the interfolded molecule. This leads to several recognizable "domains" of secondary structure like hairpin loops, bulges, and internal loops. The secondary structure of RNA molecules can be predicted computationally by calculating the minimum free energy (MFE) structure over the different possible combinations of hydrogen bonding and domains (Mathews et al. 2004). There has been a significant amount of research directed at the RNA structure prediction problem.
Nucleic acids were discovered in 1868 by Johann Friedrich Miescher (1844-1895), who called the material 'nuclein' since it was found in the nucleus. It was later discovered that prokaryotic cells, which do not have a nucleus, also contain nucleic acids.
The role of RNA in protein synthesis had been suspected since 1939, based on experiments carried out by Torbjörn Caspersson, Jean Brachet, and Jack Schultz. Hubert Chantrenne elucidated the messenger role played by RNA in the synthesis of proteins in ribosomes. Severo Ochoa discovered an enzyme able to synthesize RNA in the laboratory, work for which he received the 1959 Nobel Prize for Medicine. The sequence of the 77 nucleotides of a yeast transfer RNA was determined by Robert W. Holley in 1964, winning Holley the 1968 Nobel Prize for Medicine. In 1976, Walter Fiers and his team at the University of Ghent determined the complete nucleotide sequence of bacteriophage MS2-RNA (Fiers et al. 1976).
List of RNA types
|Type||Function||Distribution|
|mRNA||Codes for protein||All cells|
|snRNA||RNA modification||All cells|
|snoRNA||RNA modification||All cells|
|piRNA||Gene regulation||Animal germline cells|
|Antisense mRNA||Preventing translation||Bacteria|
|SRP RNA||mRNA tagging for export||All cells|
In addition, the genome of many types of viruses consists of RNA, namely:
- Double-stranded RNA viruses
- Positive-sense RNA viruses
- Negative-sense RNA viruses
- Satellite viruses
- Ahlquist, P. 2002. RNA-dependent RNA polymerases, viruses, and RNA silencing. Science 296(5571): 1270-1273.
- Berg, J. M., J. L. Tymoczko, and L. Stryer. 2002. Biochemistry, 5th Edition. WH Freeman and Company. ISBN 0716746840.
- Doran, G. 2007. RNAi – Is one suffix sufficient? Journal of RNAi and Gene Silencing 3(1): 217-219. Retrieved December 7, 2007.
- Fiers, W., et al. 1976. Complete nucleotide sequence of bacteriophage MS2-RNA: Primary and secondary structure of replicase gene. Nature 260: 500-507.
- Hansen, J. L., A. M. Long, and S. C. Schultz. 1997. Structure of the RNA-dependent RNA polymerase of poliovirus. Structure 5(8): 1109-1122. Retrieved December 7, 2007.
- Mathews, D. H., M. D. Disney, J. L. Childs, S. J. Schroeder, M. Zuker, and D. H. Turner. 2004. Incorporating chemical modification constraints into a dynamic programming algorithm for prediction of RNA secondary structure. Proc. Natl. Acad. Sci. U. S. A. 101(19): 7287-7292. Retrieved December 6, 2007.
- Nudler, E., and M. E. Gottesman. 2002. Transcription termination and anti-termination in E. coli. Genes to Cells 7: 755-768. Retrieved December 7, 2007.
- RNA World website. Retrieved December 8, 2007.
- Nucleic Acid Database Images of DNA, RNA and complexes. Retrieved December 8, 2007.
- RNAJunction Database: Extracted atomic models of RNA junction and kissing loop structures. Retrieved December 8, 2007.
|Nucleic acids|
|Nucleobases: Adenine - Thymine - Uracil - Guanine - Cytosine - Purine - Pyrimidine|
|Nucleosides: Adenosine - Uridine - Guanosine - Cytidine - Deoxyadenosine - Thymidine - Deoxyguanosine - Deoxycytidine|
|Nucleotides: AMP - UMP - GMP - CMP - ADP - UDP - GDP - CDP - ATP - UTP - GTP - CTP - cAMP - cGMP|
|Deoxynucleotides: dAMP - dTMP - dUMP - dGMP - dCMP - dADP - dTDP - dUDP - dGDP - dCDP - dATP - dTTP - dUTP - dGTP - dCTP|
|Nucleic acids: DNA - RNA - LNA - PNA - mRNA - ncRNA - miRNA - rRNA - siRNA - tRNA - mtDNA - Oligonucleotide|
| http://www.newworldencyclopedia.org/entry/RNA | 13 |
15 | Warning: the HTML version of this document is generated from LaTeX and may contain translation errors. In particular, some mathematical expressions are not translated correctly.
3.1 Function calls
You have already seen one example of a function call:
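>>> type("32")
<type 'str'>
(The output is shown as Python 2 displays it; a Python 3 interpreter would print <class 'str'> instead.)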
The name of the function is type, and it displays the type of a value or variable. The value or variable, which is called the argument of the function, has to be enclosed in parentheses. It is common to say that a function "takes" an argument and "returns" a result. The result is called the return value.
Instead of printing the return value, we could assign it to a variable:
>>> betty = type("32")
As another example, the id function takes a value or a variable and returns an integer that acts as a unique identifier for the value:
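For example (the identifiers shown here are made up; the actual numbers will differ from run to run and machine to machine):
>>> id(3)
134882108
>>> betty = 3
>>> id(betty)
134882108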
3.2 Type conversion
Python provides a collection of built-in functions that convert values from one type to another. The int function takes any value and converts it to an integer, if possible, or complains otherwise:
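For example (the exact wording of the error message depends on the Python version):
>>> int("32")
32
>>> int("Hello")
ValueError: invalid literal for int(): Hello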
int can also convert floating-point values to integers, but remember that it truncates the fractional part:
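For example:
>>> int(3.99999)
3
>>> int(-2.3)
-2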
The float function converts integers and strings to floating-point numbers:
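For example (older interpreters may display a few extra digits for the second result):
>>> float(32)
32.0
>>> float("3.14159")
3.14159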
Finally, the str function converts to type string:
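For example:
>>> str(32)
'32'
>>> str(3.14159)
'3.14159'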
It may seem odd that Python distinguishes the integer value 1 from the floating-point value 1.0. They may represent the same number, but they belong to different types. The reason is that they are represented differently inside the computer.
3.3 Type coercion
Now that we can convert between types, we have another way to deal with integer division. Returning to the example from the previous chapter, suppose we want to calculate the fraction of an hour that has elapsed. The most obvious expression, minute / 60, does integer arithmetic, so the result is always 0, even at 59 minutes past the hour.
One solution is to convert minute to floating-point and do floating-point division:
>>> minute = 59
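>>> float(minute) / 60.0   # evaluates to approximately 0.9833 (displayed digits vary by version)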
Alternatively, we can take advantage of the rules for automatic type conversion, which is called type coercion. For the mathematical operators, if either operand is a float, the other is automatically converted to a float:
>>> minute = 59
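>>> minute / 60.0   # the integer operand is coerced to a float; again approximately 0.9833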
3.4 Math functions
In mathematics, you have probably seen functions like sin and log, and you have learned to evaluate expressions like sin(pi/2) and log(1/x). First, you evaluate the expression in parentheses (the argument). For example, pi/2 is approximately 1.571, and 1/x is 0.1 (if x happens to be 10.0).
Then, you evaluate the function itself, either by looking it up in a table or by performing various computations. The sin of 1.571 is 1, and the log of 0.1 is -1 (assuming that log indicates the logarithm base 10).
This process can be applied repeatedly to evaluate more complicated expressions like log(1/sin(pi/2)). First, you evaluate the argument of the innermost function, then evaluate the function, and so on.
Python has a math module that provides most of the familiar mathematical functions. A module is a file that contains a collection of related functions grouped together.
Before we can use the functions from a module, we have to import them:
>>> import math
To call one of the functions, we have to specify the name of the module and the name of the function, separated by a dot, also known as a period. This format is called dot notation.
>>> decibel = math.log10 (17.0)
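>>> angle = 1.5                  # the value and variable names here are assumed reconstructions
>>> height = math.sin(angle)     # the "third statement" discussed below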
The first statement sets decibel to the logarithm of 17, base 10. There is also a function called log that takes logarithm base e.
The third statement finds the sine of the value of the variable angle. sin and the other trigonometric functions (cos, tan, etc.) take arguments in radians. To convert from degrees to radians, divide by 360 and multiply by 2*pi. For example, to find the sine of 45 degrees, first calculate the angle in radians and then take the sine:
>>> degrees = 45
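>>> angle = degrees * 2 * math.pi / 360.0   # convert degrees to radians, as described above
>>> math.sin(angle)                          # evaluates to approximately 0.7071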
The constant pi is also part of the math module. If you know your geometry, you can check the previous result by comparing it to the square root of two divided by two:
>>> math.sqrt(2) / 2.0
3.5 Composition
Just as with mathematical functions, Python functions can be composed, meaning that you use one expression as part of another. For example, you can use any expression as an argument to a function:
>>> x = math.cos(angle + math.pi/2)
This statement takes the value of pi, divides it by 2, and adds the result to the value of angle. The sum is then passed as an argument to the cos function.
You can also take the result of one function and pass it as an argument to another:
>>> x = math.exp(math.log(10.0))
3.6 Adding new functions
So far, we have only been using the functions that come with Python, but it is also possible to add new functions. Creating new functions to solve your particular problems is one of the most useful things about a general-purpose programming language.
In the context of programming, a function is a named sequence of statements that performs a desired operation. This operation is specified in a function definition. The functions we have been using so far have been defined for us, and these definitions have been hidden. This is a good thing, because it allows us to use the functions without worrying about the details of their definitions.
The syntax for a function definition is:
def NAME( LIST OF PARAMETERS ):
  STATEMENTS
You can make up any names you want for the functions you create, except that you can't use a name that is a Python keyword. The list of parameters specifies what information, if any, you have to provide in order to use the new function.
There can be any number of statements inside the function, but they have to be indented from the left margin. In the examples in this book, we will use an indentation of two spaces.
The first couple of functions we are going to write have no parameters, so the syntax looks like this:
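The listing that originally appeared here is missing; based on the description that follows (a function named newLine containing a single bare print statement), it presumably looked like this:
def newLine():
  print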
This function is named newLine. The empty parentheses indicate that it has no parameters. It contains only a single statement, which outputs a newline character. (That's what happens when you use a print command without any arguments.)
The syntax for calling the new function is the same as the syntax for built-in functions:
print "First Line."
The output of this program is:
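Re-created to match the description below:
First Line.

Second Line.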
Notice the extra space between the two lines. What if we wanted more space between the lines? We could call the same function repeatedly:
print "First Line."
Or we could write a new function named threeLines that prints three new lines:
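A definition matching that description (reconstructed, since the original listing is missing):
def threeLines():
  newLine()
  newLine()
  newLine()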
This function contains three statements, all of which are indented by two spaces. Since the next statement is not indented, Python knows that it is not part of the function.
You should notice a few things about this program:
So far, it may not be clear why it is worth the trouble to create all of these new functions. Actually, there are a lot of reasons, but this example demonstrates two:
As an exercise, write a function called nineLines that uses threeLines to print nine blank lines. How would you print twenty-seven new lines?
3.7 Definitions and use
Pulling together the code fragments from Section 3.6, the whole program looks like this:
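A reconstruction of that program (the original listing was lost in extraction; the final three lines are inferred from the exercise below that refers to them):
def newLine():
  print

def threeLines():
  newLine()
  newLine()
  newLine()

print "First Line."
threeLines()
print "Second Line."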
This program contains two function definitions: newLine and threeLines. Function definitions get executed just like other statements, but the effect is to create the new function. The statements inside the function do not get executed until the function is called, and the function definition generates no output.
As you might expect, you have to create a function before you can execute it. In other words, the function definition has to be executed before the first time it is called.
As an exercise, move the last three lines of this program to the top, so the function calls appear before the definitions. Run the program and see what error message you get.
As another exercise, start with the working version of the program and move the definition of newLine after the definition of threeLines. What happens when you run this program?
3.8 Flow of execution
In order to ensure that a function is defined before its first use, you have to know the order in which statements are executed, which is called the flow of execution.
Execution always begins at the first statement of the program. Statements are executed one at a time, in order from top to bottom.
Function definitions do not alter the flow of execution of the program, but remember that statements inside the function are not executed until the function is called. Although it is not common, you can define one function inside another. In this case, the inner definition isn't executed until the outer function is called.
Function calls are like a detour in the flow of execution. Instead of going to the next statement, the flow jumps to the first line of the called function, executes all the statements there, and then comes back to pick up where it left off.
That sounds simple enough, until you remember that one function can call another. While in the middle of one function, the program might have to execute the statements in another function. But while executing that new function, the program might have to execute yet another function!
Fortunately, Python is adept at keeping track of where it is, so each time a function completes, the program picks up where it left off in the function that called it. When it gets to the end of the program, it terminates.
3.9 Parameters and arguments
Some of the built-in functions you have used require arguments, the values that control how the function does its job. For example, if you want to find the sine of a number, you have to indicate what the number is. Thus, sin takes a numeric value as an argument.
Some functions take more than one argument. For example, pow takes two arguments, the base and the exponent. Inside the function, the values that are passed get assigned to variables called parameters.
Here is an example of a user-defined function that has a parameter:
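A definition consistent with the description below (reconstructed; the original listing is missing):
def printTwice(bruce):
  print bruce, bruce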
This function takes a single argument and assigns it to a parameter named bruce. The value of the parameter (at this point we have no idea what it will be) is printed twice, followed by a newline. The name bruce was chosen to suggest that the name you give a parameter is up to you, but in general, you want to choose something more illustrative than bruce.
The function printTwice works for any type that can be printed:
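Re-created calls (the particular values are illustrative):
>>> printTwice('Spam')
Spam Spam
>>> printTwice(5)
5 5
>>> printTwice(3.14159)
3.14159 3.14159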
In the first function call, the argument is a string. In the second, it's an integer. In the third, it's a float.
The same rules of composition that apply to built-in functions also apply to user-defined functions, so we can use any kind of expression as an argument for printTwice:
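A re-created example, matching the output described in the next sentence:
>>> printTwice('Spam'*4)
SpamSpamSpamSpam SpamSpamSpamSpam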
As usual, the expression is evaluated before the function is run, so printTwice prints SpamSpamSpamSpam SpamSpamSpamSpam instead of 'Spam'*4 'Spam'*4.
As an exercise, write a call to printTwice that does print 'Spam'*4 'Spam'*4. Hint: strings can be enclosed in either single or double quotes, and the type of quote not used to enclose the string can be used inside it as part of the string.
We can also use a variable as an argument:
>>> michael = 'Eric, the half a bee.'
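>>> printTwice(michael)       # reconstructed; the original transcript was truncated
Eric, the half a bee. Eric, the half a bee.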
Notice something very important here. The name of the variable we pass as an argument (michael) has nothing to do with the name of the parameter (bruce). It doesn't matter what the value was called back home (in the caller); here in printTwice, we call everybody bruce.
3.10 Variables and parameters are local
When you create a local variable inside a function, it only exists inside the function, and you cannot use it outside. For example:
def catTwice(part1, part2):
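  # body reconstructed from the description below: concatenate the parts, then print the result twice
  cat = part1 + part2
  printTwice(cat)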
This function takes two arguments, concatenates them, and then prints the result twice. We can call the function with two strings:
>>> chant1 = "Pie Jesu domine, "
When catTwice terminates, the variable cat is destroyed. If we try to print it, we get an error:
>>> print cat
3.11 Stack diagrams
To keep track of which variables can be used where, it is sometimes useful to draw a stack diagram. Like state diagrams, stack diagrams show the value of each variable, but they also show the function to which each variable belongs.
Each function is represented by a frame. A frame is a box with the name of a function beside it and the parameters and variables of the function inside it. The stack diagram for the previous example looks like this:
The order of the stack shows the flow of execution. printTwice was called by catTwice, and catTwice was called by __main__, which is a special name for the topmost function. When you create a variable outside of any function, it belongs to __main__.
Each parameter refers to the same value as its corresponding argument. So, part1 has the same value as chant1, part2 has the same value as chant2, and bruce has the same value as cat.
If an error occurs during a function call, Python prints the name of the function, and the name of the function that called it, and the name of the function that called that, all the way back to __main__.
For example, if we try to access cat from within printTwice, we get a NameError:
Traceback (innermost last):
This list of functions is called a traceback. It tells you what program file the error occurred in, and what line, and what functions were executing at the time. It also shows the line of code that caused the error.
3.12 Functions with results
You might have noticed by now that some of the functions we are using, such as the math functions, yield results. Other functions, like newLine, perform an action but don't return a value. That raises some questions:
The answer to the last question is that you can write functions that yield results, and we'll do it in Chapter 5.
As an exercise, answer the other two questions by trying them out. When you have a question about what is legal or illegal in Python, a good way to find out is to ask the interpreter.
| http://www.greenteapress.com/thinkpython/thinkCSpy/html/chap03.html | 13
17 | Characteristics associated with informal logic are development of criteria, identification of logical fallacies, development of premises, criticism, argumentation tactics and analysis of data.
Informal logic is concerned with valid premise identification. A premise is the base from which we build conclusions. A premise is the main initial subject of a statement. Here is a statement:
“Because Henry has never told a lie, I believe him”
The premise here is "Henry has never told a lie"; our conclusion is that we believe Henry.
Development of Criteria
As we approach an argument, belief, or statement, we must establish criteria for the premises. In the above example, how might we look at the conclusion if we knew the premise was not true? What if we knew the premise was absolutely true? What if we did not know for sure either way? As you can see, the validity of a premise is very important for being able to formulate true conclusions. To help us figure out if a premise is true or false, we must be able to create criteria with which to judge it. Sometimes there will be many criteria or just one criterion.
To find out if a person has ever told a lie or not we might establish criteria such as:
- Asking the person questions with known answers to cross check
- Asking family and friends about the person's background and cross-referencing those accounts with facts or statements from the person
- Researching tapes and audio of the person and cross checking them against known facts
Also, as we establish criteria for the premise, these criteria should attempt to solidify logical absolutes about the condition of the premise.
Identifying Logical Fallacies
Logical fallacies are breakdowns and incorrect structurings of arguments and beliefs. Identifying logical fallacies is key to creating good arguments and beliefs. If something is structured fallaciously, we cannot know if it is true or not because the logic of the statement is not sound. Remember, though, that just because something is structured fallaciously does not mean its conclusion is false!
Informal Logic and Critical Thinking
A key skill that develops along with Informal Logic is critical thinking. This is often associated with informal logic because the practice of informal logic usually involves thinking about your own thoughts. When learning and practicing informal logic the logician should be as concerned with their own premises, possible fallacies, and logic structure as with the actual content of the discussion. In fact the content of the discussion is rather arbitrary until the logician can easily identify premises and logical fallacies. Identifying logical fallacies and premises naturally segues into critical thinking as we have to analyze how and why thoughts are formed. | http://logical-critical-thinking.com/logic/informal-logic/ | 13 |
26 | In logic and philosophy, proposition refers to either (a) the content or meaning of a meaningful declarative sentence or (b) the pattern of symbols, marks, or sounds that make up a meaningful declarative sentence. Propositions in either case are intended to be truth-bearers, that is, they are either true or false.
The existence of propositions in the former sense, as well as the existence of "meanings", is disputed. Where the concept of a "meaning" is admitted, its nature is controversial. In earlier texts writers have not always made it sufficiently clear whether they are using the term proposition in sense of the words or the "meaning" expressed by the words. To avoid the controversies and ontological implications, the term sentence is often now used instead of proposition or statement to refer to just those strings of symbols that are truth-bearers, being either true or false under an interpretation.
In mathematics, the word "proposition" is often used as a synonym for "theorem".
Common usage contrasted with philosophical usage
In common usage, different sentences express the same proposition when they have the same meaning. For example, "Snow is white" (in English) and "Schnee ist weiß" (in German) are different sentences, but they say the same thing, so they express the same proposition. Another way to express this proposition is, "Tiny crystals of frozen water are white." In common usage, this proposition is true.
Philosophy requires more careful definitions. The above definition, for example, allows "Is snow white?" and "Ist Schnee weiß?" to express the same proposition if they have the same meaning, although neither of them, being questions, could be either true or false. One such more careful definition might be that
Two meaningful declarative sentence-tokens express the same proposition if and only if they mean the same thing.
thus defining proposition in terms of synonymity.
Unfortunately, the above definition has the result that two sentences which have the same meaning and thus express the same proposition, could have different truth-values, e.g "I am Spartacus" said by Spartacus and said by John Smith; and e.g. "It is Wednesday" said on a Wednesday and on a Thursday.
Usage in Aristotle
Aristotle identifies a proposition as a sentence which affirms or denies the predicate of a subject. An Aristotelian proposition may take the form "All men are mortal" or "Socrates is a man." In the first example, which a mathematical logician would call a quantified predicate (note the difference in usage), the subject is "men" and the predicate "all are mortal". In the second example, which a mathematical logician would call a statement, the subject is "Socrates" and the predicate is "is a man". The second example is an atomic element in propositional logic, the first example is a statement in predicate logic. The compound proposition, "All men are mortal and Socrates is a man," combines two atomic propositions, and is considered true if and only if both parts are true.
Usage by the Logical Positivists
Often propositions are related to closed sentences, to distinguish them from what is expressed by an open sentence, or predicate. In this sense, propositions are statements that are either true or false. This conception of a proposition was supported by the philosophical school of logical positivism.
Some philosophers hold that other kinds of speech or actions also assert propositions. Yes-no questions are an inquiry into a proposition's truth value. Traffic signs express propositions without using speech or written language. It is also possible to use a declarative sentence to express a proposition without asserting it, as when a teacher asks a student to comment on a quote; the quote is a proposition (that is, it has a meaning) but the teacher is not asserting it. "Snow is white" expresses the proposition that snow is white without asserting it (i.e. claiming snow is white).
Propositions are also spoken of as the content of beliefs and similar intentional attitudes such as desires, preferences, and hopes. For example, "I desire that I have a new car," or "I wonder whether it will snow" (or, whether it is the case "that it will snow"). Desire, belief, and so on, are thus called propositional attitudes when they take this sort of content.
Usage by Russell
Russell held that propositions were structured entities with objects and properties as constituents. Others have held that a proposition is the set of possible worlds/states of affairs in which it is true. One important difference between these views is that on the Russellian account, two propositions that are true in all the same states of affairs can still be differentiated. For instance, the proposition that two plus two equals four is distinct on a Russellian account from three plus three equals six. If propositions are sets of possible worlds, however, then all mathematical truths are the same set (the set of all possible worlds).
Relation to the mind
In relation to the mind, propositions are discussed primarily as they fit into propositional attitudes. Propositional attitudes are simply attitudes characteristic of folk psychology (belief, desire, etc.) that one can take toward a proposition (e.g. 'it is raining', 'snow is white', etc.). In English, propositions usually follow folk psychological attitudes by a "that clause" (e.g. "Jane believes that it is raining"). In philosophy of mind, mental states are often taken to primarily consist in propositional attitudes. The propositions are usually said to be the "mental content" of the attitude. For example, if Jane has a mental state of believing that it is raining, her mental content is the proposition 'it is raining'. Furthermore, since such mental states are about something (namely propositions), they are said to be intentional mental states. Philosophical debates surrounding propositions as they relate to propositional attitudes have also recently centered on whether they are internal or external to the agent or whether they are mind-dependent or mind-independent entities (see the entry on internalism and externalism in philosophy of mind).
Treatment in logic
As noted above, in Aristotelian logic a proposition is a particular kind of sentence, one which affirms or denies a predicate of a subject. Aristotelian propositions take forms like "All men are mortal" and "Socrates is a man."
In mathematical logic, propositions, also called "propositional formulas" or "statement forms", are statements that do not contain quantifiers. They are composed of well-formed formulas consisting entirely of atomic formulas, the five logical connectives, and symbols of grouping. Propositional logic is one of the few areas of mathematics that is totally solved, in the sense that it has been proven internally consistent, every theorem is true, and every true statement can be proved. (From this fact, and Gödel's Theorem, it is easy to see that propositional logic is not sufficient to construct the set of integers.) The most common extension of propositional logic is called predicate logic, which adds variables and quantifiers.
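As a small illustration (not part of the original article, and only a sketch), the five connectives and a brute-force truth-table check can be written in a few lines of Python:

from itertools import product

# The five usual connectives, applied to truth values.
def neg(p): return not p
def conj(p, q): return p and q
def disj(p, q): return p or q
def implies(p, q): return (not p) or q
def iff(p, q): return p == q

# A propositional formula is modeled as a function of its atomic propositions;
# is_tautology checks it under every assignment of truth values.
def is_tautology(formula, arity):
    return all(formula(*values) for values in product([True, False], repeat=arity))

# Example: (P and Q) implies P holds under every assignment, so it is a tautology.
assert is_tautology(lambda p, q: implies(conj(p, q), p), 2)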
Objections to propositions
A number of philosophers and linguists claim that the philosophical definition of a proposition is too vague to be useful. For them, it is just a misleading concept that should be removed from philosophy and semantics. W.V. Quine maintained that the indeterminacy of translation prevented any meaningful discussion of propositions, and that they should be discarded in favor of sentences. | http://www.reference.com/browse/synonymity | 13
17 | When an action is performed on an object, the object must send a message to the operating system (OS), letting the OS know what happened. The OS must then decide what to do, whether to respond to the message or send the message to another object. Obviously, for a message to accomplish its purpose, it must carry some information. Because there are different types of objects and there are various types of actions that can be performed on them, there are also various types of messages.
Although events primarily have to do with computer programming, in our lessons, we will not write code. Instead, we will use other friendly means that Microsoft Access provides to deal with events.
To access the events of an object, display the form or report in Design View. In the Property Sheet, click the Event tab. This would display the names of the events associated with whatever is selected on the form or report:
The names of most events start with On, which means, "at the time this is done". Because most, if not all, events have to do with time, an event is said to be fired. That is, the object fires an event at the time something happens. When an event is fired, if necessary, it gathers the necessary message to be transmitted where appropriate.
Again, remember that most events have to do with time (they are occurrences). In some cases, something must be done before the actual action is applied. For this reason, the names of some events start with Before. When something must be taken care of after the action has applied, the event that must be used to implement the desired behavior starts with After.
Some events are general and they are shared by most objects. For example, almost all objects can be clicked. Some events are based on a category of objects, such as only objects that can receive text. Some other events are very restricted because their object needs particular functionality.
As mentioned already, some events are shared by almost all objects.
Probably the most common event fires when an object is clicked. The event is called On Click. This event doesn't carry any information other than letting the target know that the object has been clicked. This event doesn't identify what mouse button was used or anything else. Therefore, use this event for a normal click action.
Another common event of Windows controls is fired when an object is double-clicked. This event is represented as On Dbl Click.
Some controls must be clicked before being used. A user can also press Tab a few times from one or other controls to move focus on a particular control. In both cases, when a control receives focus, it fires an event named On Got Focus:
Text-based controls are controls that a user can click to type text. Those controls are the text box and the combo box. When such a control is clicked, whether it already contains text or not, it fires an event named On Enter. Like On Got Focus, the On Enter event indicates that the control has received focus:
After using a control, a user can press Tab. In this case, the focus would move from the current object to the next control in the tab sequence. The control that loses focus fires an event named On Lost Focus. If the control is text-based, the control fires the On Exit event.
The mouse and the keyboard are the most regularly used objects. In fact, some applications can be completely used with the mouse only. This makes this object particularly important. In Microsoft Access, the mouse is responsible for at least three events:
If the user positions the mouse on top of a control but doesn't click, the control fires an event named On Mouse Move. Remember that this event fires when the mouse passes over an object, whether the user is doing anything on the object or not.
When the user positions the mouse on an object and presses a (mouse button), the object fires the On Mouse Down. To make its action effective, the message of this event holds the following pieces of information:
If a user had pressed a mouse button, when she releases the (mouse) button, the control fires an event named On Mouse Up. The message of this event carries the same types of information as the On Mouse Down event.
There are various ways a user uses the keyboard. For example, a user can press a key on a control:
The user can press Tab to move focus from one control to another. A user can also click a text-based control and start typing. Either way when the user presses a key, the control that has focus fires an On Key Down event. The message of this event carries two pieces of information:
After pressing the key, when the user releases it, the control fires an event named On Key Up. The message of this event carries the same types of information as the On Key Down event.
When the user presses a key, if you are interested only on the key that was pressed and not on any combination of keys, use the On Key Press event. The message of this event carries only one piece of information, which is the ASCII code of the key.
To use a form, the user must open it, either from the Navigation Pane or from another object you provide them. When the form is being opened, it fires an event named On Open. As the form is opening, it must occupy memory. As this is happening, the form fires an event named On Load.
To make itself known to the operating system and to other applications on the same computer, the form must draw its border. When this is being done, the form fires the On Resize event.
After the form has had the size it needs, the operating system must activate it. If the form is being opened as the first object, it gets positioned in the interface body of Microsoft Access. If the form was already opened and there are other forms (or reports and/or tables), if the user wants to bring it to the front, she must click either its title bar or an area of the form. When this is done, the operating system must paint its title bar with a bright color. In either case, when a form comes to the front of other windows, it fires an event named On Activate.
Once a form has been loaded and is currently the active form, the user can use it. After using the form, the user can close it (either with the system close button or with other means of closing the form that you provide). As this starts, the form must lose focus. If the form was the only object opened in Microsoft Access, the body of the application is emptied. If there are other objects, the form would be closed and another object would become active. As this is done, the form fires the On Deactivate event.
When the form is being closed, it must be removed from memory to release the resources it was using (so that those resources can be used by other applications). While this is being done, the form fires the On Unload event.
Once the form has been removed from memory, it (the form) fires an event called On Close.
As you know already, to use a text box or a combo box, the user can click the control and start typing. If the control already contained some text, the user can edit it using the Spacebar, Backspace, Delete, and the letter keys. When the text is being entered or edited, the control fires the On Change event.
A combo box is a control that holds a list of items. To use it, the user can click the arrow of the control to display the list and select an item. Some versions of the combo box allow a user to click the text box part of the control and start typing. The control would then try to find an item that matches what the user is typing or has typed. Sometimes, after the user has finished typing and presses Enter or Tab to move focus, Microsoft Access (the database engine) may not find a match and would display an error. This means that the text the user typed did not match any of the items of the combo box. In this case, the control would fire an event named On Not in List. You can use this event to display a friendly message to the user and to take an appropriate action.
The web browser has many events appropriate for its functionality:
We already know that you can submit the path of a file or a URL to it. When a file path or a URL is given to a web browser, before it processes it, the control fires an event named On Before Navigate. If there is no problem in this event, the control shows the file or the web page. When the control has finished displaying the document, the web browser fires the On Document Complete event. If there is a change on the document, the control fires an On Progress Change event.
When a web browser receives a file path or a URL, it makes an attempt to show that file or the web page. If it encounters a problem, it fires an On Navigation Error event.
At any time, and if you allow it, the user can change the document the control is displaying. When a new document must be displayed, the control fires an On Updated event.
Because Microsoft Access is a database application, it provides some event that are particular to records and their fields on a form or report.
To create a new record, the user must move to an empty record on a form. The user can click a control such as a text box and start typing. When this happens, the form fires an event named Before Insert.
If a record exists already, the user can open or access it, click one of its fields, and start typing or editing. When at least one value in the record has been changed, the form fires the On Dirty event.
After a record has been changed and submitted to the database, the form would fire an event named Before Update.
When a new record has been created, it must be submitted to the database. When this is done, the form fires an After Insert event. After an existing record has been modified, the change must be submitted to the database. In this case, the form fires the After Update event.
If a table contains more than one record, after the user has opened its corresponding form, the user can navigate from one record to another. When the user moves from record to record, the form fires an event named On Current.
We know that, to delete a record, a user can click the record and press Delete. This would display a warning message. Before that message comes up, the form fires the Before Del Confirm event. After the user has clicked one of the buttons on the message box, the form fires an After Del Confirm event.
If the user decides to delete a record, before the record is actually deleted, the form fires an On Delete event.
A macro is an (automatic) action that must be performed on an object of a database. An example would consist of saving something when a key is pressed on the keyboard. Another example would consist of printing something when an object is clicked. Microsoft Access provides an easy and visual mechanism to create and manage macros.
To create a macro in Microsoft Access, you can use an intuitive dialog box that allows you to select the action to be performed and the options the action needs. In reality, when you create a macro, Microsoft Access creates a type of script that contains names, expressions, and operations for the action, sparing the details. Still, if you know what is necessary for the macro, you can "manually" create it.
To create a macro, on the Ribbon, click Create. In the Macros & Code section, click the Macro button. Two windows would display, separated by a split bar:
To give more room to one of the windows, position the mouse between them, click and drag in the desired direction.
The left window presents a tab or a title bar labeled Macro1. By default, that window displays a combo box.
The right window displays buttons with +. This means that they are nodes. To expand a node, click its + button. When you do, the node would display its items.
The Program Flow node allows you to create a condition:
The Actions node holds most of the actions you will create for your macros. If you expand it, you will see that it organizes its actions in categories, each represented by a node:
To access the actual action you want, expand its node. This would display the actions in that category:
To create a macro from the left window, click the arrow of the combo box to display the available actions:
If you see the action you want, you can click it. The left window would display the objects (controls) needed for the options of the action you selected. The objects in that window depend on the action you selected.
To create an action using the right window, expand the node(s). Many names of actions are explicit or can be inferred logically. Otherwise, you can click an action. The bottom section would show a description of the action:
If you see the action you want, click and drag it to the left window.
In both cases, if you selected an action you don't want anymore, you can click the Delete button.
To perform its action(s), a macro may need some additional information. This information is referred to as the argument of a macro. The argument can be made of one or more values. The argument(s) depend(s) on the (type of) macro. Some macros take no arguments while others take 1, 2, or more arguments.
If you select an action that doesn't take an argument, its name would display in the top section and nothing else:
If you select an action that needs one argument, its name would display followed by a box for the corresponding argument:
If you select an action that needs more than one argument, it would appear, followed by a box for each argument:
An argument is said to be required if it must always be provided, otherwise the action cannot be performed. If you select a macro that takes a required argument, an empty text field would appear and you must type the necessary values:
An argument is referred to as optional if it can be omitted, in which case the macro would use a default value. Normally, when you are creating a macro, its corresponding box(es) would display the default value(s). If you select an action that takes one argument and the argument is optional, its corresponding argument field would display the default value:
When an action takes more than one argument, some arguments can be required while some others are optional. The person (Microsoft) who created the macro also decided what arguments would be required and what arguments would be optional. The macro creator also decided about the order of the arguments, which one(s) would appear first and which one(s) would appear last.
If you select a macro that takes more than one argument that are a combination of required and optional arguments, for each argument that is optional, the default value would appear in its placeholder of its corresponding boxes.
After creating a macro, you can use it. This is usually done by assigning it to an event of a form, a report or a control. For example, you can first create a button that would be used to access a section of a page break. To assign a macro to an object, access the Property Sheet for the object and access the Event or the All tab. You can type the name of the macro in the event's field:
Instead of first creating a macro before assigning it to command button, as another technique, in the Design View of the form, you can right-click the object and click Build Events. In the Choose Builder dialog box, you can click Macro Builder and click OK. The new macro would be automatically assigned to the control.
In Lesson 15, we saw how to create a page break. To implement it in the Action combo box, select GoToPage. In the Page Number box, enter the desired number of the section, and close the macro. You would be asked to save it. | http://functionx.com/access/Lesson17.htm | 13 |
22 | For many students, Assumption questions are the most difficult type of Critical Reasoning problem. An assumption is simply an unstated premise of the argument; that is, an integral component of the argument that the author takes for granted and leaves unsaid. In our daily lives we make thousands of assumptions, but they make sense because they have context and we have experience with the way the world works.
Think for a moment about the many assumptions required during the simple act of ordering a meal at a restaurant. You assume that: the prices on the menu are correct; the items on the menu are available; the description of the food is reasonably accurate; the waiter will understand what you say when you order; the food will not sicken or kill you; the restaurant will accept your payment, et cetera. In an GMAT question, you are faced with the difficult task of figuring out the author’s mindset and determining what assumption he or she made when formulating the argument. This task is unlike any other on the GMAT.
Because an assumption is an integral component of the author’s argument, a piece that must be true in order for the conclusion to be true, assumptions are necessary for the conclusion. Hence, the answer you select as correct must contain a statement that the author relies upon and is fully committed to in the argument. Think of an assumption as the foundation of the argument, a statement that the premises and conclusion rest upon. If an answer choice contains a statement that the author might only think could be true, or if the statement contains additional information that the author is not committed to, then the answer is incorrect. In many respects, an assumption can be considered a minimalist answer. Because the statement must be something the author believed when forming the argument, assumption answer choices cannot contain extraneous information. For example, let us say that an argument requires the assumption “all dogs are intelligent.” The correct answer could be that statement, or even a subset statement such as “all black dogs are intelligent” or “all large dogs are intelligent” (black dogs and large dogs being subsets of the overall group of dogs, of course). But, additional information would rule out the answer, as in the following case: “All dogs and cats are intelligent.” The additional information about cats is not part of the author’s assumption, and would make the answer choice incorrect.
Because assumptions are described as what must be true in order for the conclusion to be true, some students ask about the difference between Must Be True question answers and Assumption question answers. The difference is one that can be described as before versus after: Assumption answers contain statements that were used to make the conclusion; Must Be True answers contain statements that follow from the argument made in the stimulus. In both cases, however, there is a stringent requirement that must be met: Must Be True answers must be proven by the information in the stimulus; Assumption answers contain statements the author must believe in order for the conclusion to be valid.
Question stem examples:
“The argument in the passage depends on which of the following assumptions?”
“The argument above assumes that”
“The conclusion above is based on which of the following assumptions?”
“Which of the following is an assumption made in drawing the conclusion above?”
“The conclusion of the argument above cannot be true unless which of the following is true” | http://www.beatthegmat.com/mba/2009/09/24/assumption-questions-in-critical-reasoning | 13 |
29 | So far all the programs we've written in this book have had no memory of the past history of the computation. We invoke a function with certain arguments, and we get back a value that depends only on those arguments. Compare this with the operation of Scheme itself:
> (foo 3)
ERROR: FOO HAS NO VALUE
> (define (foo x) (word x x))
> (foo 3)
33
Scheme remembers that you have defined
foo, so its response to
the very same expression is different the second time. Scheme maintains a
record of certain results of its past interaction with you; in particular,
Scheme remembers the global variables that you have defined. This record is
called its state.
Most of the programs that people use routinely are full of state; your text editor, for example, remembers all the characters in your file. In this chapter you will learn how to write programs with state.
The Indianapolis 500 is an annual 500-mile automobile race, famous among people who like that sort of thing. It's held at the Indianapolis Motor Speedway, a racetrack in Indianapolis, Indiana. (Indiana is better known as the home of Dan Friedman, the coauthor of some good books about Scheme.) The racetrack is 2½ miles long, so, as you might imagine, the racers have to complete 200 laps in order to finish the race. This means that someone has to keep track of how many laps each car has completed so far.
Let's write a program to help this person keep count. Each car has a
number, and the count person will invoke the procedure
lap with that
number as argument every time a car completes a lap. The procedure will
return the number of laps that that car has completed altogether:
> (lap 87)
1
> (lap 64)
1
> (lap 17)
1
> (lap 64)
2
> (lap 64)
3
(Car 64 managed to complete three laps before the other cars
completed two because the others had flat tires.) Note that we typed
(lap 64) three times and got three different answers.
Lap isn't a function! A function has to return the same answer
whenever it's invoked with the same arguments.
The point of this chapter is to show how procedures like
lap can be
written. To accomplish this, we're going to use a data structure called a
vector. (You may have seen something similar in other
programming languages under the name "array.")
A vector is, in effect, a row of boxes into which values can be put. Each vector has a fixed number of boxes; when you create a vector, you have to say how many boxes you want. Once a vector is created, there are two things you can do with it: You can put a new value into a box (replacing any old value that might have been there), or you can examine the value in a box. The boxes are numbered, starting with zero.
> (define v (make-vector 5))
> (vector-set! v 0 'shoe)
> (vector-set! v 3 'bread)
> (vector-set! v 2 '(savoy truffle))
> (vector-ref v 3)
BREAD
There are several details to note here. When we invoke make-vector, we
give it one argument, the number of boxes we want the vector to have. (In
this example, there are five boxes, numbered 0 through 4. There is no
box 5.) When we create the vector, there is nothing in any of the boxes.
We put things in boxes using the
vector-set! procedure. The
exclamation point in its name indicates that this is a mutator—a procedure that changes the value of some previously
created data structure. The exclamation point is pronounced "bang," as in
"vector set bang." (Scheme actually has several such mutators, including
mutators for lists, but this is the only one we'll use in this book.
A procedure that modifies its argument is also called destructive.) The arguments to
vector-set! are the vector, the
number of the box (the index), and the desired new value. Like define,
vector-set! returns an unspecified value.
We examine the contents of a box using
vector-ref, which takes two
arguments, the vector and an index.
Vector-ref is similar to
list-ref, except that it operates on vectors instead of lists.
We can change the contents of a box that already has something in it.
> (vector-set! v 3 'jewel)
> (vector-ref v 3)
JEWEL
The old value of box 3,
bread, is no longer there. It's
been replaced by the new value.
> (vector-set! v 1 741)
> (vector-set! v 4 #t)
> v
#(SHOE 741 (SAVOY TRUFFLE) JEWEL #T)
Once the vector is completely full, we can print its value. Scheme
prints vectors in a format like that of lists, except that there is a number sign (
#) before the open parenthesis. If you ever have need for a
constant vector (one that you're not going to mutate), you can quote it
using the same notation:
> (vector-ref '#(a b c d) 2) C
To implement our
lap procedure, we'll keep its state information, the
lap counts, in a vector. We'll use the car number as the index into the
vector. It's not enough to create the vector; we have to make sure that
each box has a zero as its initial value.
(define *lap-vector* (make-vector 100))

(define (initialize-lap-vector index)
  (if (< index 0)
      'done
      (begin (vector-set! *lap-vector* index 0)
             (initialize-lap-vector (- index 1)))))

> (initialize-lap-vector 99)
DONE
We've created a global variable whose value is the vector. We used a recursive procedure to put a zero into each box of the vector. Note that the vector is of length 100, but its largest index is 99. Also, the base case of the recursion is that the index is less than zero, not equal to zero as in many earlier examples. That's because zero is a valid index.
Now that we have the vector, we can write
(define (lap car-number)
  (vector-set! *lap-vector*
               car-number
               (+ (vector-ref *lap-vector* car-number) 1))
  (vector-ref *lap-vector* car-number))
Remember that a procedure body can include more than one expression. When the procedure is invoked, the expressions will be evaluated in order. The value returned by the procedure is the value of the last expression (in this case, the second one).
Lap has both a return value and a side effect. The job of the first
expression is to carry out that side effect, that is, to add 1 to the lap
count for the specified car. The second expression looks at the value we
just put in a box to determine the return value.
We remarked earlier that
lap isn't a function because invoking it
twice with the same argument doesn't return the same value both times.
It's not a coincidence that
lap also violates functional programming
by maintaining state information. Any procedure whose return value is not a
function of its arguments (that is, whose return value is not always the
same for any particular arguments) must depend on knowledge of what has
happened in the past. After all, computers don't pull results out of the
air; if the result of a computation doesn't depend entirely on the arguments
we give, then it must depend on some other information available to the procedure.
Suppose somebody asks you, "Car 54 has just completed a lap; how many has it completed in all?" You can't answer that question with only the information in the question itself; you have to remember earlier events in the race. By contrast, if someone asks you, "What's the plural of `book'?" what has happened in the past doesn't matter at all.
The connection between non-functional procedures and state also applies to
non-functional Scheme primitives. The
read procedure, for example,
returns different results when you invoke it repeatedly with the same
argument because it remembers how far it's gotten in the file. That's why
the argument is a port instead of a file name: A port is an abstract data
type that includes, among other things, this piece of state. (If you're
reading from the keyboard, the state is in the person doing the typing.)
A more surprising example is the
random procedure that you met in an earlier chapter.
Random isn't a function because it doesn't always
return the same value when called with the same argument. How does
random compute its result? Some versions of
random compute a number
that's based on the current time (in tiny units like milliseconds so you
don't get the same answer from two calls in quick succession). How does
your computer know the time? Every so often some procedure (or some
hardware device) adds 1 to a remembered value, the number of milliseconds
since midnight. That's state, and
random relies on it.
The most commonly used algorithm for random numbers is a little trickier;
each time you invoke
random, the result is a function of the
result from the last time you invoked it. (The procedure is pretty
complicated; typically the old number is multiplied by some large, carefully
chosen constant, and only the middle digits of the product are kept.) Each
time you invoke
random, the returned value is stashed away somehow so
that the next invocation can remember it. That's state too.
Just because a procedure remembers something doesn't necessarily make it stateful. Every procedure remembers the arguments with which it was invoked, while it's running. Otherwise the arguments wouldn't be able to affect the computation. A procedure whose result depends only on its arguments (the ones used in the current invocation) is functional. The procedure is non-functional if it depends on something outside of its current arguments. It's that sort of "long-term" memory that we consider to be state.
In particular, a procedure that uses
let isn't stateful merely because
the body of the
let remembers the values of the variables created by the let. Once the
let returns a value, the variables that it created no
longer exist. You couldn't use
let, for example, to carry out the
kind of remembering that lap needs.
Let doesn't remember a
value between invocations, just during a single invocation.
One of the advantages of the vector data structure is that it allows elements to be rearranged. As an example, we'll create and shuffle a deck of cards.
We'll start with a procedure
card-list that returns a list of all the
cards, in standard order:
(define (card-list)
  (reduce append
          (map (lambda (suit)
                 (map (lambda (rank) (word suit rank))
                      '(a 2 3 4 5 6 7 8 9 10 j q k)))
               '(h s d c))))

> (card-list)
(HA H2 H3 H4 H5 H6 H7 H8 H9 H10 HJ HQ HK SA S2 S3 S4 S5 S6 S7 S8 S9 S10 SJ SQ SK DA D2 D3 D4 D5 D6 D7 D8 D9 D10 DJ DQ DK CA C2 C3 C4 C5 C6 C7 C8 C9 C10 CJ CQ CK)
In card-list, we need
reduce append because the result
from the outer invocation of
map is a list of lists:
((HA H2 …) (SA …) …).
Each time we want a new deck of cards, we start with this list of 52 cards,
copy the list into a vector, and shuffle that vector. We'll use the Scheme primitive list->vector
, which takes a list as argument and
returns a vector of the same length, with the boxes initialized to the
corresponding elements of the list. (There is also a procedure vector->list
that does the reverse. The characters -> in
these function names are meant to look like an arrow
(→); this is a Scheme convention for functions
that convert information from one data type to another.)
(define (make-deck)
  (shuffle! (list->vector (card-list)) 51))

(define (shuffle! deck index)
  (if (< index 0)
      deck
      (begin (vector-swap! deck index (random (+ index 1)))
             (shuffle! deck (- index 1)))))

(define (vector-swap! vector index1 index2)
  (let ((temp (vector-ref vector index1)))
    (vector-set! vector index1 (vector-ref vector index2))
    (vector-set! vector index2 temp)))
Now, each time we call
make-deck, we get a randomly shuffled
vector of cards:
> (make-deck) #(C4 SA C7 DA S4 D9 SQ H4 C10 D5 H9 S10 D6 S9 CA C9 S2 H7 S5 H6 D7 HK S7 C3 C2 C6 HJ SK CQ CJ D4 SJ D8 S8 HA C5 DK D3 HQ D10 H8 DJ C8 H2 H5 H3 CK S3 DQ S6 D2 H10) > (make-deck) #(CQ H7 D10 D5 S8 C7 H10 SQ H4 H3 D8 C9 S7 SK DK S6 DA D4 C6 HQ D6 S2 H5 CA H2 HJ CK D7 H6 HA CJ C4 SJ HK SA C2 D2 S4 DQ S5 C10 H9 D9 C5 D3 DJ C3 S9 S3 C8 S10 H8)
How does the shuffling algorithm work? Conceptually it's not complicated, but there are some implementation details that make the actual procedures a little tricky. The general idea is this: We want all the cards shuffled into a random order. So we choose any card at random, and make it the first card. We're then left with a one-card-smaller deck to shuffle, and we do that by recursion. (This algorithm is similar to selection sort from Chapter 15, except that we select a random card each time instead of selecting the smallest value.)
The details that complicate this algorithm have to do with the fact that we're using a vector, in which it's easy to change the value in one particular position, but it's not easy to do what would otherwise be the most natural thing: If you had a handful of actual cards and wanted to move one of them to the front, you'd slide the other cards over to make room. There's no "sliding over" in a vector. Instead we use a trick; we happen to have an empty slot, the one from which we removed the randomly chosen card, so instead of moving several cards, we just move the one card that was originally at the front into that slot. In other words, we exchange two cards, the randomly chosen one and the one that used to be in front.
Second, there's nothing comparable to
cdr to provide a
one-card-smaller vector to the recursive invocation. Instead, we must use
the entire vector and also provide an additional
index argument, a
number that keeps track of how many cards remain to be shuffled. It's
simplest if each recursive invocation is responsible for the range of cards
0 to position
index of the vector, and therefore
the program actually moves each randomly selected card to the end of
the remaining portion of the deck.
If you want to make a vector with only a few boxes, and you know in advance
what values you want in those boxes, you can use the constructor vector. Like
list, it takes any number of arguments and
returns a vector containing those arguments as elements:
> (define beatles (vector 'john 'paul 'george 'pete))
> (vector-set! beatles 3 'ringo)
> beatles
#(JOHN PAUL GEORGE RINGO)
vector-length takes a vector as argument and returns the
number of boxes in the vector.
> (vector-length beatles) 4
equal?, which we've used with words and lists, also
accepts vectors as arguments. Two vectors are equal if they are the same
size and all their corresponding elements are equal. (A list and a vector
are never equal, even if their elements are equal.)
Finally, the predicate
vector? takes anything as argument and returns
#t if and only if its argument is a vector.
Here are two procedures that you've seen earlier in this chapter, which do something to each element of a vector:
(define (initialize-lap-vector index)
  (if (< index 0)
      'done
      (begin (vector-set! *lap-vector* index 0)
             (initialize-lap-vector (- index 1)))))

(define (shuffle! deck index)
  (if (< index 0)
      deck
      (begin (vector-swap! deck index (random (+ index 1)))
             (shuffle! deck (- index 1)))))
These procedures have a similar structure, like the similarities we found in other recursive patterns. Both of these procedures take an index as an argument, and both have
(< index 0)
as their base case. Also, both have, as their recursive case, a
begin in which the first action does something to the vector element
selected by the current index, and the second action is a recursive call
with the index decreased by one. These procedures are initially called with
the largest possible index value.
In some cases it's more convenient to count the index upward from zero:
(define (list->vector lst)
  (l->v-helper (make-vector (length lst)) lst 0))

(define (l->v-helper vec lst index)
  (if (= index (vector-length vec))
      vec
      (begin (vector-set! vec index (car lst))
             (l->v-helper vec (cdr lst) (+ index 1)))))
Since lists are naturally processed from left to right (using
cdr), this program must process the vector from left to right as well.
Since we introduced vectors to provide mutability, you may have the impression that mutability is the main difference between vectors and lists. Actually, lists are mutable too, although the issues are more complicated; that's why we haven't used list mutation in this book.
The most important difference between lists and vectors is that each kind of aggregate lends itself to a different style of programming, because some operations are faster than others in each. List programming is characterized by two operations: dividing a list into its first element and all the rest, and sticking one new element onto the front of a list. Vector programming is characterized by selecting elements in any order, from a collection whose size is set permanently when the vector is created.
To make these rather vague descriptions more concrete, here are two procedures, one of which squares every number in a list, and the other of which squares every number in a vector:
(define (list-square numbers)
  (if (null? numbers)
      '()
      (cons (square (car numbers))
            (list-square (cdr numbers)))))

(define (vector-square numbers)
  (vec-sq-helper (make-vector (vector-length numbers))
                 numbers
                 (- (vector-length numbers) 1)))

(define (vec-sq-helper new old index)
  (if (< index 0)
      new
      (begin (vector-set! new index (square (vector-ref old index)))
             (vec-sq-helper new old (- index 1)))))
In the list version, the intermediate stages of the algorithm deal with lists that are smaller than the original argument. Each recursive invocation "strips off" one element of its argument and "glues on" one extra element in its return value. In the vector version, the returned vector is created, at full size, as the first step in the algorithm; its component parts are filled in as the program proceeds.
This example can plausibly be done with either vectors or lists, so we've used it to compare the two techniques. But some algorithms fit most naturally with one kind of aggregate and would be awkward and slow using the other kind. The swapping of pairs of elements in the shuffling algorithm would be much harder using lists, while mergesort would be harder using vectors.
The best way to understand these differences in style is to know the
operations that are most efficient for each kind of aggregate. In each
case, there are certain operations that can be done in one small unit of
time, regardless of the number of elements in the aggregate, while other
operations take more time for more elements. The constant time
operations for lists are cons, car, and cdr; the ones for vectors are
vector-ref, vector-set!, and vector-length. And if you reread the
squaring programs, you'll find that these are precisely the operations they rely on.
We might have used
list-ref in the list version, but we didn't, and
Scheme programmers usually don't, because we know that it would be slower.
Similarly, we could implement something like
cdr for vectors, but that
would be slow, too, since it would have to make a one-smaller vector and
copy the elements one at a time. There are two possible morals to this
story, and they're both true: First, programmers invent and learn the
algorithms that make sense for whatever data structure is available. Thus
we have well-known programming patterns appropriate for lists, and
different patterns appropriate for vectors.
Second, programmers choose which data structure to use depending on what
algorithms they need. If you want to shuffle cards, use a vector, but if
you want to split the deck into a bunch of variable-size piles, lists might
be more appropriate. In general, vectors are good at selecting elements in
arbitrary order from a fixed-size collection; lists are good only at
selecting elements strictly from left to right, but they can vary in size.
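To see why a cdr-like operation for vectors can't be done in constant time, here is a hypothetical sketch of our own (the names vector-rest and copy-from! are ours, not part of the chapter):

;; A hypothetical "rest of a vector" operation.  It must allocate a
;; brand-new, one-smaller vector and copy every remaining element,
;; so the work grows with the length of the vector.
(define (vector-rest vec)
  (copy-from! (make-vector (- (vector-length vec) 1)) vec 0))

(define (copy-from! new old index)
  (if (= index (vector-length new))
      new
      (begin (vector-set! new index (vector-ref old (+ index 1)))
             (copy-from! new old (+ index 1)))))

By contrast, cdr on a list simply returns the existing rest of the list without copying anything, which is why list programmers lean on it so heavily.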
In this book, despite what we're saying here about efficiency, we've generally tried to present algorithms in the way that's easiest to understand, even when we know that there's a faster way. For example, we've shown several recursive procedures in which the base case test was
(= (count sent) 1)
If we were writing the program for practical use, rather than for a book, we would have written
(empty? (butfirst sent))
because we know that empty? and
butfirst are both
constant time operations (because for sentences they're implemented as
lists), while count takes a long time for large
sentences. But the version using
count makes the intent of the program clearer.
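As an illustration (ours, not the book's), here is a procedure that adds up a nonempty sentence of numbers, written with each style of base case:

;; Base case written for clarity: count re-examines the whole sentence
;; on every recursive call, so this version slows down on long sentences.
(define (add-numbers sent)
  (if (= (count sent) 1)
      (first sent)
      (+ (first sent) (add-numbers (butfirst sent)))))

;; Base case written for speed: empty? and butfirst take constant time.
(define (add-numbers-fast sent)
  (if (empty? (butfirst sent))
      (first sent)
      (+ (first sent) (add-numbers-fast (butfirst sent)))))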
Effects, sequence, and state are three sides of the same coin.
In Chapter 20 we explained the connection between effect (printing something on the screen) and sequence: It matters what you print first. We also noted that there's no benefit to a sequence of expressions unless those expressions produce an effect, since the values returned by all but the last expression are discarded.
In this chapter we've seen another connection. The way our vector programs
maintain state information is by carrying out effects, namely,
vector-set! invocations. Actually, every effect changes some kind
of state; if not in Scheme's memory, then on the computer screen or in a file.
The final connection to be made is between state and sequence. Once a
program maintains state, it matters whether some computation is carried out
before or after another computation that changes the state. The example at
the beginning of this chapter in which an expression had different results
before and after defining a variable illustrates this point. As another
example, if we evaluate
(lap 1) 200 times and
(lap 2) 200 times,
the program's determination of the winner of the race depends on whether the
last evaluation of
(lap 1) comes before or after the last invocation of (lap 2).
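As a concrete sketch (ours, using an assumed stand-in for the chapter's lap procedure; we take lap to add one to a car's entry in the 100-box *lap-vector* and return the new count):

;; Assumed, simplified stand-in for the chapter's lap procedure.
(define (lap car-number)
  (vector-set! *lap-vector*
               car-number
               (+ (vector-ref *lap-vector* car-number) 1))
  (vector-ref *lap-vector* car-number))

;; After 199 laps each for cars 1 and 2, whichever of (lap 1) and
;; (lap 2) is evaluated last determines which entry reaches 200 last,
;; and therefore what the program reports when it names the winner.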
Because these three ideas are so closely connected, the names sequential programming (emphasizing sequence) and imperative programming (emphasizing effect) are both used to refer to a style of programming that uses all three. This style is in contrast with functional programming, which, as you know, uses none of them.
Although functional and sequential programming are, in a sense, opposites, it's perfectly possible to use both styles within one program, as we pointed out in the tic-tac-toe program of Chapter 20. We'll show more such hybrid programs in the following chapters.
Don't forget that the first element of a vector is number zero, and there is no element whose index number is equal to the length of the vector. (Although these points are equally true for lists, it doesn't often matter, because we rarely select the elements of a list by number.) In particular, in a vector recursion, if zero is the base case, then there's probably still one element left to process.
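Here is a small sketch of our own showing the pitfall:

;; Wrong: stopping at index 0 means element number zero is never added.
(define (add-up-wrong vec index)
  (if (= index 0)
      0
      (+ (vector-ref vec index) (add-up-wrong vec (- index 1)))))

;; Right: stop only when the index has gone below zero.
(define (add-up vec index)
  (if (< index 0)
      0
      (+ (vector-ref vec index) (add-up vec (- index 1)))))

(add-up-wrong '#(1 2 3) 2) returns 5, silently ignoring the first element, while (add-up '#(1 2 3) 2) returns 6.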
Try the following experiment:
> (define dessert (vector 'chocolate 'sundae))
> (define two-desserts (list dessert dessert))
> (vector-set! (car two-desserts) 1 'shake)
> two-desserts
(#(CHOCOLATE SHAKE) #(CHOCOLATE SHAKE))
You might have expected that after asking to change one word in
two-desserts, the result would be
(#(CHOCOLATE SHAKE) #(CHOCOLATE SUNDAE))
However, because of the way we created
two-desserts, both of
its elements are the same vector. If you think of a list as a
collection of things, it's strange to imagine the very same thing in two
different places, but that's the situation. If you want to have two separate
vectors that happen to have the same values in their elements, but are
individually mutable, you'd have to say
> (define two-desserts (list (vector 'chocolate 'sundae)
                             (vector 'chocolate 'sundae)))
> (vector-set! (car two-desserts) 1 'shake)
> two-desserts
(#(CHOCOLATE SHAKE) #(CHOCOLATE SUNDAE))
Each invocation of
vector (or make-vector) creates a
new, independent vector.
Do not solve any of the following exercises by converting a vector to a list, using list procedures, and then converting the result back to a vector.
23.1 Write a procedure
sum-vector that takes a vector full of
numbers as its argument and returns the sum of all the numbers:
> (sum-vector '#(6 7 8))
21
23.2 Some versions of Scheme provide a procedure
vector-fill! that takes a
vector and anything as its two arguments. It replaces every element of the
vector with the second argument, like this:
> (define vec (vector 'one 'two 'three 'four))
> vec
#(one two three four)
> (vector-fill! vec 'yeah)
> vec
#(yeah yeah yeah yeah)
Write vector-fill!. (It doesn't matter what value it returns.)
23.3 Write a function
vector-append that works just like regular
append, but for vectors:
> (vector-append '#(not a) '#(second time))
#(not a second time)
23.5 Write a procedure
vector-map that takes two arguments, a
function and a vector, and returns a new vector in which each box contains
the result of applying the function to the corresponding element of the argument vector.
23.6 Write a procedure
vector-map! that takes two arguments, a
function and a vector, and modifies the argument vector by replacing each
element with the result of applying the function to that element. Your
procedure should return the same vector.
23.7 Could you write
vector-filter? How about vector-filter!?
Explain the issues involved.
23.8 Modify the
lap procedure to print "Car 34 wins!" when car 34
completes its 200th lap. (A harder but more correct modification is
to print the message only if no other car has completed 200 laps.)
23.9 Write a procedure
leader that says which car is in the lead right now.
23.10 Why doesn't this solution to Exercise 23.9 work?
(define (leader)
  (leader-helper 0 1))

(define (leader-helper leader index)
  (cond ((= index 100) leader)
        ((> (lap index) (lap leader))
         (leader-helper index (+ index 1)))
        (else (leader-helper leader (+ index 1)))))
23.11 In some restaurants, the servers use computer terminals to keep track of what each table has ordered. Every time you order more food, the server enters your order into the computer. When you're ready for the check, the computer prints your bill.
You're going to write two procedures, order and bill.
Order takes a table number and an item as arguments and
adds the cost of that item to that table's bill.
Bill takes a table
number as its argument, returns the amount owed by that table, and resets
the table for the next customers. (Your
order procedure can examine a global variable named
*menu* to find the price of each item.)
> (order 3 'potstickers)
> (order 3 'wor-won-ton)
> (order 5 'egg-rolls)
> (order 3 'shin-shin-special-prawns)
> (bill 3)
13.85
> (bill 5)
2.75
23.12 Rewrite selection sort (from Chapter 15) to sort a vector. This can
be done in a way similar to the procedure for shuffling a deck: Find
the smallest element of the vector and exchange it (using
vector-swap!) with the value in the first box. Then find the smallest
element not including the first box, and exchange that with the second box,
and so on. For example, suppose we have a vector of numbers:
#(23 4 18 7 95 60)
Your program should transform the vector through these intermediate stages:
#(4 23 18 7 95 60)   ; exchange 4 with 23
#(4 7 18 23 95 60)   ; exchange 7 with 23
#(4 7 18 23 95 60)   ; exchange 18 with itself
#(4 7 18 23 95 60)   ; exchange 23 with itself
#(4 7 18 23 60 95)   ; exchange 60 with 95
23.13 Why doesn't this work?
(define (vector-swap! vector index1 index2)
  (vector-set! vector index1 (vector-ref vector index2))
  (vector-set! vector index2 (vector-ref vector index1)))
23.14 Implement a two-dimensional version of vectors. (We'll call one of these structures a matrix.) The implementation will use a vector of vectors. For example, a three-by-five matrix will be a three-element vector, in which each of the elements is a five-element vector. Here's how it should work:
> (define m (make-matrix 3 5))
> (matrix-set! m 2 1 '(her majesty))
> (matrix-ref m 2 1)
(HER MAJESTY)
23.15 Generalize Exercise 23.14 by implementing an array structure that can have any number of dimensions. Instead of taking two numbers as index arguments, as the matrix procedures do, the array procedures will take one argument, a list of numbers. The number of numbers is the number of dimensions, and it will be constant for any particular array. For example, here is a three-dimensional array (4×5×6):
> (define a1 (make-array '(4 5 6)))
> (array-set! a1 '(3 2 3) '(the end))
23.16 We want to reimplement sentences as vectors instead of lists.
(a) Write versions of
sentence, empty?, first, last, butfirst, and butlast that use vectors. Your
selectors need only work for sentences, not for words.
> (sentence 'a 'b 'c)
#(A B C)
> (butfirst (sentence 'a 'b 'c))
#(B C)
(You don't have to make these procedures work on lists as well as vectors!)
(b) Does the following program still work with the new implementation of sentences? If not, fix the program.
(define (praise stuff) (sentence stuff '(is good)))
(c) Does the following program still work with the new implementation of sentences? If not, fix the program.
(define (praise stuff) (sentence stuff 'rules!))
(d) Does the following program still work with the new implementation of sentences? If not, fix the program. If so, is there some optional rewriting that would improve its performance?
(define (item n sent)
  (if (= n 1)
      (first sent)
      (item (- n 1) (butfirst sent))))
(e) Does the following program still work with the new implementation of sentences? If not, fix the program. If so, is there some optional rewriting that would improve its performance?
(define (every fn sent)
  (if (empty? sent)
      sent
      (sentence (fn (first sent))
                (every fn (butfirst sent)))))
(f) In what ways does using vectors to implement sentences affect the speed of the selectors and constructor? Why do you think we chose to use lists?
In some versions of Scheme,
make-vector can take an
optional argument specifying an initial value to put in every box. In those
versions, we could just say
(define *lap-vector* (make-vector 100 0))
without having to use the initialization procedure.
That's what we mean by "non-functional," not that it doesn't work!
We could get around this problem in a different way:
(define (card-list)
  (every (lambda (suit)
           (every (lambda (rank) (word suit rank))
                  '(a 2 3 4 5 6 7 8 9 10 j q k)))
         '(h s d c)))
In this version, we're taking advantage of the fact that our
sentence data type was defined in a way that prevents the creation of
sublists. A sentence of cards is a good representation for the deck.
However, with this approach we are mixing up the list and sentence data
types, because later we're going to invoke
list->vector with this deck
of cards as its argument. If we use sentence tools such as
every to create the deck, then the procedure
card-list should really be called card-sentence.
What difference does it make? The
every version works fine, as long
as sentences are implemented as lists, so that
list->vector can be
applied to a sentence. But the point about abstract data types such as
sentences is to avoid making assumptions about their implementation. If
for some reason we decided to change the internal representation of
sentences, then list->vector could no longer be applied to a sentence.
Strictly speaking, if we're going to use this trick, we need a separate procedure to convert a sentence into a vector.
Of course, if you don't mind a little typing, you can avoid this whole issue
by having a quoted list of all 52 cards built into the definition of card-list.
Where did this information come from? Just take our word for it. In later courses you'll study how vectors and lists are implemented, and then there will be reasons.
For words, it turns out, the
count version is faster,
because words behave more like vectors than like lists.
… to coin a phrase.
| http://www.eecs.berkeley.edu/~bh/ssch23/vectors.html | 13
18 | The Boston Collaborative Encyclopedia of Modern Western Theology
John Locke (1632-1704) (Julian Gotobed, 2004-2005)
John Locke's Moral Philosophy in An Essay Concerning Human Understanding (Joas Adiprasetya, 2004-2005)
John Locke (1632-1704) (Brandon Daniel Hughes, 2002-2003)
John Locke (1632-1704) (Marylu Bunting, 2000-2001)
John Locke (1632-1704) (Slavica Jakelic, 1998-1999)
Julian Gotobed, 2004
John Locke witnessed and contributed to a period of turbulent change in English history. England experienced ferment in politics, economics, religion, philosophy, literature, and science throughout the seventeenth century. At the outset of the century, the English monarch, James I (1603-25), proclaimed the Divine Right of Kings and, therefore, absolute discretion to rule as he pleased. Charles I (1625-49) adopted the same principle in his reign. He pursued policies that alienated Parliament, which represented those with property and the rising merchant class, and precipitated a Civil War (1642-46). Charles I continued to be untrustworthy even in defeat and was consequently tried and executed in 1649 by the New Model Army, which had fought for Parliament against the King, under the leadership of Oliver Cromwell. A Republic was briefly declared, but, ultimately, did not succeed. The monarchy was restored in 1660 in the person of Charles II. Charles II died in 1685 and was succeeded by his brother James II. At the close of the century, Parliament asserted its authority over the monarchy in the Glorious Revolution by ejecting James II in 1688 and inviting William of Orange to become King of England. William ascended to the throne in 1689. The balance of power shifted from monarch to Parliament as the century progressed.
Church and Crown traditionally worked in concert to reinforce one another’s claims and interests in England. Critics of the State Church and the Monarchy were subject to severe punishments in the first four decades of the seventeenth century. Religious liberty was an alien concept. The State Church, however, did not escape both internal and external criticism. The Puritans sought to reconfigure the State Church in the Reformed tradition and pressed for a preaching ministry in parish churches, less ritualized liturgies, and a Presbyterian polity. Dissenters, Christians that refused to conform to the worship prescribed by the Church of England, added to the fragmentation and confusion of the period. The first Baptist congregation on English soil was planted in London in 1612. John Bunyan (1628-88) epitomized the dissenting spirit’s plea for religious liberty in his life, ministry, imprisonment, and writings, most notably in The Pilgrim’s Progress (1678). A century that began with imprisonment and torture of religious dissenters ended with the Act of Toleration in 1689 that permitted dissenters to worship freely.
In addition to Bunyan’s classic work, English literature and Christian spirituality were transformed by the appearance of two enduring publications: The Authorized Version of the Bible (1611) and The Book of Common Prayer (1662). All three publications assisted the development and standardization of the English language. The Authorized Version of the Bible enabled any person that could read to critique the teaching and practice of the State Church or any Dissenting congregation with reference to the New Testament. Theology was no longer confined to the cloisters of Oxford and Cambridge. Any congregation or interested individual could search the Scriptures to discern the Word of God for daily life. Books and pamphlets discussing theological themes proliferated. The Bible was central to theological debate in the seventeenth century.
The intellectual climate shifted dramatically in the seventeenth century. A new perspective on knowledge displaced the Aristotelian approach to philosophy and science. Philosophers in seventeenth century England increasingly saw the task of philosophy as that of testing propositions against empirical evidence and constructing conceptual frameworks based on first principles that were self-evident to human reason. Philosophers divided on whether or not human reason alone was sufficient for knowledge of God and the practice of religion.
John Locke was born into the revolutionary climate of seventeenth century England on 29 August 1632 at Wrington, a small village in Somerset, near Bristol. His father was an attorney and a modest property holder. Locke was admitted to Westminster School in 1647, subsequently elected to a studentship at Christ Church, Oxford, in 1652 and graduated BA in 1656. Locke became acquainted with the scientist Robert Boyle in 1660, the same year in which the Royal Society was founded. Locke was a polymath. In the 1660s he lectured in Greek and Moral Philosophy at Oxford, and served as secretary to Sir Walter Vane throughout his diplomatic visit to the Elector of Brandenburg (November 1665-February 1666). The Royal Society elected Locke a fellow in 1668. Locke began to write what would ultimately be published as An Essay concerning Human Understanding in 1671. He briefly served as Secretary to the Council of Trade and Foreign Plantations (1673-74). Locke returned to Oxford to study medicine and graduated as Bachelor of Medicine in 1675. The same year he traveled to France and remained there until 1679. Locke was compelled to flee England in 1683, because of his close association with the Earl of Shaftesbury. Shaftesbury attempted to exclude James II from the throne between 1679 and 1681. There is no evidence that Locke was involved in this scheme, but his friendship with Shaftesbury placed him in a vulnerable position by virtue of association. He fled to Amsterdam in 1683 and did not return to England until 1689 on the accession of William and Mary. Locke published An Essay concerning Human Understanding, Two Treatises on Government, and A Letter concerning Toleration in 1689, and met Isaac Newton in London. Locke spent the last years of his life as a man of letters working on manuscripts for publication and maintaining a considerable correspondence. He resided with Sir Francis and Lady Masham in their country house at Oates in Essex from 1691 until his death. Locke composed The Reasonableness of Christianity (1695) and published the fourth and final edition of the Essay while living quietly in the Essex countryside. He died on 28th October 1704.
Locke lived through a tumultuous age marked by conflict in Church and State. By the end of the century there was not one Church and Doctrine, but several churches and a multitude of teachings. The Reasonableness of Christianity as Delivered in the Scriptures originated in Locke’s dissatisfaction with many of the theological systems that competed for loyalty in an age of revolution. The Reasonableness of Christianity also stemmed from his desire to find a foundation for consensus in religion that would be based upon agreement in essentials and toleration in matters of secondary importance that were merely expressions of human preference. The Reasonableness of Christianity is an apologetic work. Locke attempts to commend Christianity and make a compelling case for its veracity. He has in mind an audience that places a great deal of importance upon the exercise of human reason. Locke attempts to demonstrate that Christianity is consistent with human reason. Locke’s theological method consists of several elements that merit comment. First, he assumes that he approached the Bible as an unbiased inquirer after truth (Locke 1695: 25). Locke assumes that the architects of the theological systems he rejects have read a great deal into the New Testament that is extraneous to the Gospel and constructed correspondingly distorted systems of belief. Underlying Locke’s claim to an unbiased reading of the New Testament is a presupposition that unaided human reason has the capacity to discern what is true for all people at all times and in all places. Locke does not consider the possibility that his interpretation of the New Testament is no less subjective than the thinkers that he critiques in the pages of The Reasonableness of Christianity. Locke acknowledges that the context in which he lived, a century of revolutionary change in England marked by conflict in politics and religion, prompted him to search the Bible to discern the essence of Christianity as a basis for religious consensus. A desire for agreement predisposes Locke to settle for a minimal statement of content, the lowest common denominator, to appeal to a broad theological spectrum that stretched from Deists to Calvinists. He avoids controversial issues of theological discourse such as the deity of Christ and the Trinity where much scope for disagreement existed. Locke did not approach the Bible with an unbiased gaze.
Second, Locke’s theological method is characterized by a careful study of the Bible, especially the New Testament. The Bible was the authority that Christian thinkers of all persuasions appealed to in varying degrees as the court of appeal for their ideas. Locke acknowledges the Bible as the authoritative source for the knowledge of God. Thus Locke sets out to establish the essence of Christianity by a careful reading of the Bible, particularly the Gospels and the Acts of the Apostles. In Locke’s thought the Bible functions as a source of knowledge with widely recognized authority within Christianity and power to persuade those presently outside Christianity.
Third, Locke places a limit on the power of unaided human reason. There are some things that can be made known to human beings by revelation alone. Locke accepts that the Bible is Divinely inspired. Revelation is propositional in nature. Revelation makes known what reason cannot ascertain about God and His purposes. Revelation accomplishes what reason never can. Potentially, human reason can deduce the moral law. At the same time Locke insists that revelation is never contrary to reason. Revelation must be tested by reason. In effect, revelation is subject to human reason. In making revelation subject to human reason Locke evacuates the Gospel of any mystery. God and the Gospel in The Reasonableness of Christianity are simple and intelligible and entirely compatible with human reason. There is never any suspicion that God might subvert human reason in surprising ways.
The relationship between revelation and reason in The Reasonableness of Christianity points to a fourth dimension of Locke’s theological method. He is a synthesizer. Locke attempts to integrate revelation and reason, the natural and the supernatural in his thinking. Locke does not dispense with the supernatural. In fact, the case he makes depends upon the operation of miracles in the ministry of Jesus. Locke is not always successful in creating a totally consistent integration between the elements that he tries to hold together as in the case of revelation and reason.
Fifth, the plain meaning of the text in the context in which it was written determines the meaning of a passage of Scripture for Locke. Locke anticipates subsequent historical-critical research into the life of Jesus by focusing on the text in its historical context. His study of Jesus the Messiah wrestles with the seeming reluctance of Jesus to openly declare his Messianic identity. Locke anticipates Wrede's discussion of The Messianic Secret. The Reasonableness of Christianity is not a work of Systematic Theology. He is not interested in creating a system of theology. He wants to simplify and reduce the Gospel to its essence rather than build a complex system of interrelated components. However, Locke engages problematic issues raised by the text of the New Testament and works hard to think coherently and consistently about them. Locke considers the fate of those in Israel who lived before God sent Jesus Christ to declare His will to humankind. He also speculates about those who have never heard the Gospel. Many of the questions raised by Locke in The Reasonableness of Christianity are still engaging theologians today.
Sixth, Locke engages with alternative points of view to his own from the beginning of The Reasonableness of Christianity. Locke in The Reasonableness of Christianity does not explicitly name deism and Calvinism, but their shadows linger over the text. Locke sees himself as a Christian seeking to engage systems of thought within the orbit of Christianity and also those that have abandoned any kind of Orthodox faith. Locke the theologian operates as a public figure that speaks to the community of faith and those beyond it.
The content of The Reasonableness of Christianity begins with a consideration of what Adam lost before examining what Christ restored in His work of Redemption. Locke disagrees with those that believe all people are subject to eternal punishment because of Adam. Such an outcome is inconsistent with the goodness and justice of God. “The reason for this strange interpretation we shall perhaps find in some mistaken places in the New Testament” (Locke, 1695: 27). The witness of the New Testament is relegated to the role of a supporting act when it does not agree with the canons of human reason. He also disagrees with those that think no Redemption is necessary since the Scriptures, which are the written Word of God, were plainly given to instruct people in the way of salvation. For Locke, Adam lost immortality by his act of transgression against God. Death for all was the consequence of Adam’s sin, not guilt or necessity of sinning. A righteous God would not condemn everybody to a necessity of sinning on the basis of one person’s sin. God does not impute the sin of Adam to his posterity. Locke effectively disagrees with the doctrine of original sin and diverges from those that stand in the tradition of Augustine. Each person is entirely responsible for his or her own sin without reference to the sin of Adam or any ancestors.
According to Locke, where there is no law there is no sin (Locke, 1695: 30). Law in the sense Locke means it is a set of moral imperatives and duties. The people of Israel were blessed with the Law of Moses from God. The part of the law concerned with worship and political life is temporary, but the moral code is eternal and binding. The moral law remains in force even under the Gospel. Locke distinguishes between two kinds of law. The law of works demands perfect obedience. No allowance is made for failure no matter how small. The law of faith is the means by which God justifies a sinful man who believes. The law of faith is granted to all who believe what God requires them to believe. God requires that we believe that Jesus is the Messiah (Locke, 1695: 32). This is the one article of faith essential to Christianity.
Faith in The Reasonableness of Christianity is assent to a propositional statement. Locke advances two reasons for believing the proposition that Jesus is the Messiah. First, Jesus and the Apostles performed miracles to proclaim and convince people that He was the Messiah. A miracle is a direct supernatural intervention by God that overrules the normal operation of natural laws in the universe. Second, Jesus fulfilled Old Testament prophecies about the Messiah. Locke conceived prophecies to be predictions that come to pass.
An appropriate response to the Gospel is more than mere assent to the proposition that Jesus is the Messiah, but also necessitates action, namely, obedience to the moral law. What advantage is there for Christians if any human being can, in principle, discern God’s moral demands? Locke answers that, historically, few people in human societies have seriously pursued the demanding life of a philosopher. In any event no philosophical school has ever produced such an all-embracing body of moral truth such as the teaching of Jesus. The mass of humanity has lived in moral darkness and normally lacks the time and the ability to engage in serious inquiry into the nature of truth. The propositional truth revealed by Jesus is a comprehensive body of moral truth that is unequaled by any philosophical school. Jesus acts as a revelatory shortcut that enables those who believe in him to access a comprehensive body of truth without expending the time and effort normally needed to arrive at moral insight. Locke did not welcome Dissenting congregations encouraging members to study the Bible for themselves. Plain, unlearned people needed to be taught the truth. Moreover, Locke believed that a clear authentication of one sent by God to instruct people in moral truth by means of miracles would be a far more effective way of bringing people to an awareness of moral truth rather than by reasoning with them from general principles.
The version of Christianity commended in The Reasonableness of Christianity differed markedly from the emphases found especially in many dissenting congregations. Locke’s Christianity was devoid of enthusiasm, minimized direct contact with God, and did not resonate with the more democratic or participatory instincts of dissenting Christians.
Brooke, Christopher and Denis Mack Smith, ed. A History of England. Vol.5, The Century of Revolution: 1603-1714, by Christopher Hill. Edinburgh: Thomas Nelson, 1961.
Hargreaves-Mawdsley, W.N., Oxford in the Age of John Locke. The Centres of Civilization Series. Norman: University of Oklahoma Press, 1973.
Hill, Christopher. A Tinker and a Poor Man: John Bunyan and His Church, 1628-1688. New York: Alfred A.Knopf, 1989.
Locke, John. An Essay Concerning Human Understanding, ed. Peter H.Nidditch. Oxford: Clarendon Press, 1975.
The Reasonableness of Christianity as Delivered in the Scriptures, Edited with an Introduction, Notes, Critical Apparatus and Transcription of Related Manuscripts by John C.Higgins-Biddle. Oxford: Clarendon Press, 1999.
The Reasonableness of Christianity as Delivered in the Scriptures with A Discourse of Miracles and part of A Third Letter Concerning Toleration, ed. I.T.Ramsey. Stanford: Stanford University Press, 1958.
Writings on Religion, ed. Victor Nuovo. Oxford: Clarendon Press, 2002.
Marshall, John. John Locke: Resistance, Religion and Responsibility. Cambridge: CUP, 1994.
Nuovo, Victor, ed. Contemporary Responses to The Reasonableness of Christianity. Key Issues, ed. Andrew Pyle, no.16. Bristol: Thoemmes Press, 1997.
Joas Adiprasetya, 2004
The fact that John Locke’s opus magnum, An Essay Concerning Human Understanding, was published in four editions during its author’s life (1689, 1694, 1695 & 1700) shows at least two things. First, it demonstrates Locke’s dynamic struggle with his own thinking process; second, it gives us a clue that his philosophical work received wide responses from its author’s contemporaries.
The main purpose of the Essay, as Locke states, is “to inquire into the Original, Certainty, and Extent of humane Knowledge; together, with the Grounds and Degrees of Belief, Opinion, and Assent” (I.1.2). However, Locke is fully aware that this constructive effort could be done only by primarily “clearing Ground a little, and removing some of the Rubbish, that lies in the way to Knowledge” (“The Epistle to the Reader,” 10). By reading through his four books of the Essay, we can see how Locke makes every effort to meet his two purposes (constructive and critical). What motivates him to write the Essay, however, is his discussion with some friends in his chamber about what his friend James Tyrell called the “principles of morality and revealed religion,” which finally comes to a cul-de-sac (Fraser 1959, 9n2). Thus, what he has produced in the Essay is in reality very remote from the original purpose established in the meeting. Nevertheless, we can still find out how Locke always keeps the issues of morality and religion in the background of this work. The purpose of this paper is, thus, to discuss one of the issues, i.e., morality, within the larger scope of his Essay.
General Overview of the Essay
The design of the Essay is as follows. In “Book I” Locke makes the case against the dominant grand theory of innatism. By innatism he means a theory that “there are in the Understanding certain innate Principles; some primary Notions … Characters, as it were stamped upon the Mind of Man; which the Soul receives in its very first Being; and brings into the World with it” (I.2.1). He starts by rejecting speculative innate principles (I.2) before practical or moral innate principles (I.3), simply because the latter are “more visible” than the first. He also puts in chapter 4 some additional considerations and proofs to support his campaign. It is in the last chapter of the first book that we can find massive arguments on morality and the notion of God.
Locke understands “principle” as a derivative of “idea”, and he focuses his “Book II” of the Essay on explaining his understanding of idea, through which he also provides a counter-theory of innatism. He maintains that ideas as the materials of knowledge come from experience, and experience takes two forms: sensation and reflection. These two are “the Fountains of Knowledge” (II.1.2). While the first deals with external sources of ideas or perceptions, the second deals with an internal operation of the mind that receives ideas from the senses. Thus, we cannot have the experience of reflection (or “internal Sense,” II.1.4) until we have the experience of external sensation. The understanding that ideas are conveyed from outside into the human mind, through senses and reflection, presupposes the image of mind as “empty Cabinet” or “white Paper” (I.2.15; II.1.2).
Locke devotes almost all of the pages in “Book II” to a detailed explanation of two categories of ideas as raw materials of knowledge, i.e., simple and complex ideas (Banach 2004). A simple idea is defined as “one uniform Appearance, or Conception in the mind [that is] not distinguishable into different Ideas” (II.2.1). A complex idea, on the other hand, is received and constructed actively by the mind -- through the operations of combination, relation, and abstraction -- as an amalgam of simple ideas. Furthermore, his categorization of ideas is deepened with his focused exploration of language in “Book III,” on which we do not spend much space here.
“Book IV” demonstrates Locke’s view of knowledge, which he defines as “the perception of the connection and agreement, or disagreement and repugnancy of any of our Ideas” (IV.1.2). In such a process of perceiving, one can come to three different degrees or modes of knowledge, i.e., intuitive, demonstrative, and sensitive (IV.2). Intuitive knowledge, for Locke, is “the clearest, and most certain, that humane Frailty is capable of” (IV.2.1). Locke informs us that we can grasp the intuitive knowledge immediately, without any intervention of other ideas, so that we can come to “Certainty and Evidence of all our Knowledge.”
A second degree of knowledge is demonstrative knowledge, namely, "the shewing the Agreement, or Disagreement of two Ideas, by the intervention of one or more Proofs [or “intervening Ideas,” IV.2.3], which have a constant, immutable, and visible connexion one with another” (IV.15.1). Therefore, the truth of this kind of knowledge does not come immediately, as in the intuitive knowledge, but needs the intervention of other ideas. We can intuitively (and immediately) know -- without any help from other ideas -- that a car is not a tree, but we need some other intervening ideas to know, for example, that God exists. Demonstrative knowledge, therefore, needs to be “shewn to the Understanding, and the Mind made see that it is so” (IV.2.3). Locke calls this mediated process “reasoning” (IV.2.3). Though here Locke recognizes it as one degree of knowledge, in IV.14.4 he sharply distinguishes knowledge from judgment, in the sense that in the latter the agreement or disagreement is not perceived but rather presumed. For this reason, the association of the terms “knowledge” and “demonstrative” is ambiguous, since the truth of demonstrative knowledge is presumed, as a result of judgment and probability, rather than of certainty. In this kind of “knowledge” he puts morality and religious ideas, as well as mathematics. He maintains, “[H]erein lies the difference between Probability and Certainty, Faith and Knowledge, that in all the parts of Knowledge, there is intuition; each immediate Idea, each step has its visible and certain connexion; in belief, not so” (IV.15.3).
A third degree of knowledge, namely, sensitive knowledge, is not knowledge in the strict sense of the term. It deals only with “the particular existence without us.”
Locke’s Moral Philosophy
We now turn to the issue of morality in Locke’s Essay, especially in the context of the polemic against innate ideas. We have already seen that in order to “inquire into the Original, Certainty, and Extent of humane Knowledge,” Locke needs to counter his opponents. We can grasp easily that it is the innatism that he means by the “rubbish.” It is traditionally taken that Locke has René Descartes in mind when he rejects innate ideas (cf. Aaron 1955, 88ff; Colman 1983, 51f). This is either not fully correct or not fair to Descartes, since Descartes, in his Meditations, expresses puzzlement regarding the source of ideas. He writes:
Even if it were true that Descartes is the real opponent of Locke's rejection, we would still see a deep influence of Descartes on Locke’s theory of reason, idea and knowledge (Gibson 1968, 205-32; Rogers 1995, 49-67). Despite the unsolved question of who is the direct opponent of Locke’s polemic, it is clear that his “imagined opponent” is one that takes a naïve form of innatism and not a dispositional form of the theory (Yolton 1968). My particular concern here is with his arguments against the innate ideas and morality.
According to Locke, both speculative and practical principles are not innate. They are neither present consciously at birth, nor are they present implicitly (or unconsciously) at birth in order to be discovered by reason through learning in the future. With regard to the speculative innate principles (I.2), Locke presents the main argument of his opponents. They believe that speculative principles are innate because they receive universal assent. Against this argument Locke’s answer is straightforward, as can be seen in this modus tollens, proposed by Peg O’Connor (1994, 45):
1. If a principle is innate, there must be universal assent or agreement for that principle.
2. No principles receive universal assent or agreement.
3. Therefore, no principles are innate.
In order to prove his second premise, “No principles receive universal assent or agreement,” Locke points to the fact that many people, “at least one half of Mankind” (I.2.24), do not know the principles. Nevertheless, Locke anticipates the answer from his opponents that the principles are innate unconsciously until they are discovered by the use of reason. To this Locke responds that, if they were innate, the principles would be known immediately and could not depend on being discovered by reason.
Locke employs a similar argument with additional premises in rejecting the theory of innate practical principles or moral rules (I.3). His first basic assumption is that “the Actions of Men [are] the best Interpreters of their thoughts” (I.3.3). Based on the importance of conformity between rules and actions, Locke argues that moral principles are not innate because, if they were, they would have to guide actions into conformity with those principles. Locke also adds a second argument: if a moral principle were innate, it would be universally practiced. Thus, moral principles cannot be innate, since there is cultural disagreement about such principles from one place to another.
Locke’s campaign against innate morality does not necessarily mean he rejects any innateness within the human soul. In some places he acknowledges two kinds of innateness: rationalistic and hedonistic (Aaron 1955, 257ff; Yolton 1970, 145; Lamprecht 1962, chs. III-IV). With regard to rationalistic innateness, Locke recognizes the presence of “natural Faculties” within the human soul (I.2.1-2; I.3.13), which undoubtedly refer to human senses and reason. In I.3.26 he uses the term “reasoning Faculties of the Soul, which are almost constantly ... employ'd.” Thus, though Locke’s rationalism agrees with widespread emphasis on reason as the requirement of knowledge, it differs from other types of rationalism, in the sense that reason is useful insofar as it receives ideas from sensation and reflection (Lamprecht 1962, 65).
A second sort of innateness in Locke’s Essay is more explicitly stated. He maintains in I.3.3:
He also asserts that “things then are Good or Evil, only in reference to Pleasure or Pain” (II.20.2; cf. II.21.42). Here, Locke seems to accept a certain hedonistic innateness. However, he strictly distinguishes these “inclinations” or “tendencies” from the innate ideas. Colman rightly calls Locke’s version of hedonism “a hedonistic theory of reasons for actions” (1983, 223). This is to say that, different from Hobbes’ egoistic and bodily hedonism, Locke’s is always related to human states of mind, because both tendencies “join themselves to almost all our ideas both of sensation and reflection” (II.7.2). His refusal of Hobbes’ hedonism is also clear from this statement,
In short, Locke’s hedonism allows him to accept the innate tendencies of achieving pleasure and avoiding pain insofar as they relate to experiential reasoning, sensation and reflection. The notion of happiness as the end of moral action authorizes Locke’s construction of ethics. Ethics is for him “the seeking out those Rules, and Measures of humane Actions, which lead to Happiness, and the Means to practise them” (IV.21.3).
Although Locke admits the (law of) Nature, which is also the law of God, as the source of human desire for happiness, he understands natural law in the Essay differently than in the earlier book, Essays on the Law of Nature (1664). In the earlier work he takes an absolutist position by holding the natural law as the objective standard for the good for man and thus for human actions to achieve it. In the Essay, on the other hand, he seems to summarize the natural law into two basic hedonistic motives: attaining happiness and avoiding misery. Although both hedonistic motives are indeed derived from Nature (I.3.3), we cannot ensure that the cause of pain and happiness for every person is the same. Also, the objects of happiness and pain are relative to the individual. His shift from an absolutist view of natural law to relativism is also supported by his view of the “law of opinion.” He distinguishes three kinds of law, i.e., the divine law, the civil law, and the law of opinion (II.28.7). Regarding the last law, he maintains,
It does not mean that Locke abandons the law of God as the source of practical rules of right and wrong (II.28.11). While he realizes that moral disagreement can happen, he also states that the “true ground of Morality” (I.3.6) and “the only true touchstone of moral Rectitude” (II.28.8) is the law of God. It is without doubt that Locke refers to the Christian God, “who sees Men in the dark, has in his Hand Rewards and Punishments, and Power enough to call to account the Proudest Offender” (I.3.6).
Locke is sharply aware that his campaign against innatism might be interpreted as a rejection of the God-given law of nature, especially since he argues that some of the moral principles are self-evident. To avoid this misinterpretation he maintains, “There is a great deal of difference between an innate Law, and a Law of Nature; between something imprinted on our Minds in their very original, and something that we being ignorant of may attain to the knowledge of, by the use and due application of our natural Faculties” (I.3.13).
Here, in sum, we see an ongoing struggle in Locke to reconcile his relativistic conception and the acceptance of morality as God’s law. Both elements, however, meet in his “hedonistic theory of reasons for actions.”
While the hedonistic motive in Locke’s theory has been explained above, a particular note on the “reasonableness” of morality should also be offered. We remember that for Locke morality is classified as demonstrative knowledge. The often-quoted statement, “Morality is capable of Demonstration, as well as Mathematicks” (III.11.16; cf. IV.3.18), is in practice very complicated. John W. Yolton concludes that Locke’s programme of demonstrative ethics fails. This failure, he continues, makes Locke move to another programme, in which he “[S]ettled for a haphazard listing of moral rules as required for illustration or for appeal to sanction some action” (1970, 172; cf. Reasonableness 188).
Aaron, Richard I. John Locke. Oxford: The Clarendon Press, 1955.
Banach, David. Locke on Ideas. Internet; http://www.anselm.edu/homepage/dbanach/13-LOCKE-ideas.htm; accessed September 15, 2004.
Colman, John. John Locke’s Moral Philosophy. Edinburgh: Edinburgh University Press, 1983.
Descartes, René. Meditations. John Veitch (tr.). Internet; http://www.wright.edu/cola/descartes/meditation3.html; accessed September 15, 2004.
Fraser, Alexander C. “Notes.” In John Locke, An Essay Concerning Human Understanding, two volumes, New York: Dover Publication, 1959.
Gibson, James. Locke’s Theory of Knowledge and Its Historical Relations. Cambridge: The University Press, 1968.
Lamprecht, Sterling P. The Moral and Political Philosophy of John Locke. New York: Russell & Russell, Inc., 1962.
Locke, John. An Essay Concerning Human Understanding. Peter H. Nidditch (ed.). Oxford: Clarendon Press, 1970.
_____________. Essays on the Law of Nature. W. von Leyden (ed.). Oxford: The Clarendon Press, 1965.
_____________.The Reasonableness of Christianity with A Discourse of Miracles and part of A Third Letter Concerning Toleration. I.T. Ramsey (ed.). Stanford: Stanford University Press, 1958.
O’Connor, Peg. “Locke’s Challenge to Innate Practical Principles Revisited.” In The Locke Newsletter, Roland Hall (ed.). No. 25, 1994, 41-51.
Rogers, G.A.J. “Innate Ideas and the Infinite: The Case of Locke and Descartes.” In The Locke Newsletter, Roland Hall (ed.). No. 26, 1995, 49-67.
Yolton, John W. John Locke and the Way of Ideas. Oxford: The Clarendon Press, 1968.
_____________. Locke and the Compass of Human Understanding. Cambridge: The University Press, 1970.
Brandon Daniel Hughes, 2002
Locke was born during one of the most exciting periods in English history.
As a member of the minor gentry he was privy to many of the
intrigues surrounding the coronations of Charles II and James II.
His involvement in politics and his close working friendship with
the Earl of Shaftesbury, who schemed against monarchs with Catholic
tendencies, meant that his residency in England was never entirely secure.
He spent time in France (1675-1679) and some of his most productive
years in exile in Holland (1683-1688), and only returned to England from his second
continental sojourn on the heels of the Glorious Revolution.
His experiences as a political refugee led to the writing and
publication of some of his most influential works, The
Letter on Toleration (1689) and Two
Treatises on Government (1689). His
obvious talents allowed him to find patronage among the more progressive
families in England, and enabled him to live as an occasionally practicing
doctor, a sometime public servant, and a general man of letters.
As a committed Christian thinker and theologian,
Locke engaged the changing thought of his time with vigor.
He was an associate of Isaac Newton and dabbled in the sciences
while taking a degree in medicine. He
was also familiar with the governing rationalist philosophies of the
period and was responsible for seriously questioning, if not ending, the
dominance of Cartesian epistemology.
His Essay Concerning Human
Understanding (1690), which argued for the impossibility of innate
ideas and the necessity of the sensible reception of simple ideas, was
received with both shock and acclaim.
Locke presents his systematic theology in his The
Reasonableness of Christianity (1695).
However, the true virtues and larger significance of this piece are
only seen in relation to his earlier epistemological works.
Therefore, an exposition of Locke’s theology must begin with his
conception of the role and attainment of simple ideas.
Next, one must compare the roles of reason, knowledge and faith.
Additionally, the distinctions between natural reason and
revelation must be examined. Only then can one understand the intricacies of The
Reasonableness of Christianity.
In a reaction against the dominant rationalism of his time, Locke was concerned to show that none of our ideas come from innate ideas or first principles. In place of these ideas Locke offered experience. All of our knowledge is based on “our observation employed either about external sensible objects or about the internal operations of our minds perceived and reflected on by ourselves” (Locke, 1690: 122). Locke named these two roots of knowledge sensation and reflection, and he took them to be “the only originals from whence all our ideas take their beginning” (Locke, 1690: 124). Sensation and reflection, he argued, are the portals through which all ideas enter. Even memory is a mere retrieval of previous sensations or reflections.
Ideas themselves can be further divided into two
categories, simple and complex. Simple
ideas are the basic building blocks of all knowledge and each simple idea
is “one uniform appearance, or
conception in the mind” and cannot be further analyzed into smaller
pieces (Locke, 1690: 145).
These ideas only enter our minds through the sensations and can be
neither created nor destroyed (Locke, 1690: 145).
Complex ideas, on the other hand, are fashioned by the mind from
the raw material of simple ideas. Complex
ideas include relations like father and son, as well as ideas like horse
or castle, which the mind forms by combining simple ideas received through
the senses. One never sensibly receives the simple idea of horse or
castle. Rather, one takes the
simple ideas of brown or gray and combines them with other simples so as
to develop the idea of a horse
or a castle. Locke has
many further divisions and classifications for his simple and complex
ideas, but these two divisions are sufficient for understanding his
notions of knowledge, reason, and faith.
As far as Locke is concerned, there are three kinds of knowledge: intuitive knowledge, sensitive knowledge and demonstrative knowledge (Locke, 1690: 188). As he was often fond of repeating, “we have an intuitive knowledge of our own existence, and a demonstrative knowledge of the existence of a God: of the existence of anything else, we have no other but a sensitive knowledge; which extends not beyond objects present to our senses” (Locke, 1690: 212). Sensitive knowledge is the kind of knowledge derived by means of sensations, discussed above as the access point of simple ideas. Intuitive knowledge, on the other hand, takes care of a lot of Locke’s philosophical heavy lifting. About intuition he writes, “sometimes the mind perceives the agreement or disagreement of two ideas immediately by themselves, without the intervention of any other: and this I think we may call intuitive knowledge” (Locke, 1690: 176). The italicized phrase is extraordinarily important for Locke. If two ideas needed a third mediating principle, in order to be compared with one another in the mind, then he would have to account for the presence of such a third mediating idea, and it is unlikely that he could do so by means of sensation alone. Thus, he states that two ideas like black and white either do or do not match, and there is no explaining why. It is also important to note that in his chapter on wrong assent and error he lists several possible sources of error, but incorrect intuition is not listed as a possibility (Locke, 1690: 448).
The third type of knowledge, demonstrative knowledge,
also goes by its more common name: reasoning.
Where intuition fails (not in the sense of being in error, but in
the sense of lacking sufficient means to accomplish its task) reasoning is
called upon to connect or disassociate ideas that are in hidden agreement
or disagreement. Therefore,
when intuition cannot perform its task, the mind is called on, “by
the intervention of other ideas (one or more, as it happens) to
discover the agreement or disagreement which it searches; and this is that
which we call reasoning”
(Locke, 1690: 178). We
intuitively know that black is not white, but can we intuitively affirm or
deny the connection between the ideas “Plato” and “philosopher”?
Obviously in this case some intervening terms or ideas are needed
in order to see the connection or lack thereof, and it is the mission of
reasoning to make such connections.
Before moving on to a discussion of Locke’s notion
of faith, one other peculiarity about knowledge needs to be noted.
Locke postulated different degrees of knowledge or certainty that
accompany each kind of knowledge. As
we saw above, Locke could be (and sometimes is) read as suggesting that
intuition cannot fail, which is to say that intuition is certain and thus
carries with it absolute probability.
Similarly, sensation would seem to yield a relatively high degree
of probability, since it is the foundation of all knowledge.
It would be difficult to find a reason to doubt one’s sensations,
since one’s doubts would themselves have to be based on sensational
knowledge. Demonstrative knowledge, however, can easily go awry and thus is subject to different
levels of probability (Locke,
1690: 363). Yet, this is
exactly what one would expect. The
demonstrations or reasonings of a scholar should carry a higher degree of
probability than those of a child.
Properly speaking, a probable demonstration is not an actual
demonstration. However, in a
nod to common experience, Locke recognizes that if we were always to
demand perfect reasoning and demonstrations, then humanity would never be
able to advance knowledge except at a snail's pace.
Locke’s notion of faith emerges from the limits of intuition, sensation
and reason. He writes:
These two, viz. intuition and demonstration, are the
degrees of our knowledge;
whatsoever comes short of one of these, with what assurance soever
embraced, is but faith or opinion, but not knowledge, at least in all general truths
(Locke, 1690: 185).
And herein lies the difference between probability
and certainty, faith and
knowledge, that in all the parts of knowledge there is intuition; each
immediate idea, each step has its visible and certain connexion: in
belief, not so (Locke, 1690: 365).
Faith is neither certain nor necessary; it is merely a matter of
probability and, as will be seen below, people have different capacities
for judging faith’s probability. However,
the relativity of faith is not license, so Locke argues, to believe
anything one wants. There are
limits to faith. One cannot
have faith in a proposition that goes against the clear dictates of reason
or immediate intuition (Locke, 1690: 365).
One cannot have faith that black is white, since according to
intuitive knowledge this is not the case and intuition always yields a
higher probability than does belief, faith or opinion.
Nor can one have faith in a genuinely new simple idea, for simple
ideas are only attained through sensation (Locke, 1690: 418-427).
Just as sensation, demonstration and intuition all have their
sources, faith must also have a source, which is only to say that
propositions one has faith in must come from somewhere.
Therefore, Locke argues, the dependability of propositions of
faith, just like demonstrative propositions or simple ideas, depends on
their source (Locke, 1690: 416). Reasonable
faith, if one may use the phrase, is based on propositions that come from
the most dependable of sources: God.
Before moving on to discuss Locke's conception of revelation, we must first
address his ideas on the capacity of humans to get knowledge of God
through natural epistemological channels.
Locke writes, “having furnished us with those faculties our minds
are endowed with, he [God] hath not left himself without witness: since we
have sense, perception, and reason, and cannot want a clear proof of him,
as long as we carry ourselves
about us” (Locke, 1690: 306). Humans,
Locke believes, can come to knowledge of God by means of comparing and
contrasting our simple ideas and building more and more complex ideas
until finally through a lot of hard work we manage to come to the idea of
“a supreme Being, infinite in power, goodness, and wisdom, whose
workmanship we are, and on whom we depend.” (Locke, 1690: 208)
Locke is clear about this point in multiple places.
The mental powers of the individual are sufficient for coming to
sure knowledge of God. However, Locke is equally insistent that human reason often
goes awry. It often fails
through lack of the proper simple ideas, poor reasoning and the like. Humanity would be in much better shape were there some
further guarantee that real knowledge of God could be attained.
As a final observation concerning Locke's thought in the Essay
Concerning Human Understanding, one should note that Locke held direct
revelation from God to be of the highest probability.
Unfortunately, while the revelation may be of the highest
probability when God is its source, one is never entirely sure that God is
the genuine source of such revelation (Locke, 1690: 383).
Therefore, the content of such a revelation would seem to be merely
probable so long as there is no way of guaranteeing that God is the
source. This is the issue he
addresses in The Reasonableness of Christianity.
The Reasonableness of Christianity recapitulates the entirety of Christian
salvation history from Adam all the way to the final judgement.
The text is replete with biblical references and anecdotes.
However, throughout the text Locke never loses sight of his
primary goal: to demonstrate that Jesus is a legitimate spokesperson for
God and that, therefore, his teachings can be taken as revelatory
propositions of the highest possible probability.
There are two movements in the text.
In the first movement, Locke makes the case that Jesus is the agent
of God. In the second, he
attempts to lay out the content of Jesus’ teaching and argues that it is
consonant with the conclusions of natural reason.
Two reasons are offered for why Jesus is thought to speak with the authority
of God. First, Jesus fits the ancient prophecy concerning the Messiah
(Locke, 1695: 32). Locke is
unclear as to why this demonstrates that Jesus is God’s messenger.
The validity of this demonstration would seem to be not just a
matter of Jesus’ fitting the Messianic bill, but also of the Messianic
role being demonstrably connected to the idea of God’s true messenger.
In the end, this demonstration seems to rest on its status as a
kind of miracle; i.e. it is miraculous that Jesus so perfectly fulfilled
Messianic expectations. (If
this is the case then Locke is in fact offering one rather than two
arguments for revelation through Jesus.)
The second reason offered for Jesus' authority is that he performed
miracles. In Locke’s A
Discourse of Miracles he defines a miracle as “a sensible operation,
which, being above comprehension of the spectator, and in his opinion
contrary to the established course of nature, is taken by him to be
divine” (Locke, 1701: 79). It
may at first seem odd for Locke to define a miracle relative to the
opinion and comprehension of the observer, but if one recalls the problem
of faith and revelation which was left unsolved in Essay
Concerning Human Understanding, then his definition begins to make
sense. The miraculousness of
a miracle does not stem from its being a violation of nature.
Rather, the miraculousness comes from its providing a high degree
of probability that the performer of the miracle speaks for God and that,
thereby, his or her words can be given “the highest degree of our
assent” (Locke, 1690: 383). Thus, the content of Jesus’ message can be taken as
entirely probable since his miracles earn him recognition as God's spokesperson.
What then is the content of Jesus' and God's message to humanity?
Even before listening to a word Jesus says, one already knows that
nothing Jesus teaches will contravene the dictates of reason, nor will it
offer any genuinely new simple ideas.
Locke argues that Jesus primarily teaches two things.
First, he teaches that he is the Messiah (Locke, 1695: 39).
This is a reasonable proposition since it amounts to the claim that
God cared enough about humanity to send a messenger.
Thus faith in the fact that Jesus is the Messiah is tantamount to
believing that God desires to save humanity. Second, Jesus teaches
repentance followed by righteousness (Locke, 1695: 45, 52).
This means that one must lead a good life.
According to Locke, such a proposition also dovetails nicely with the conclusions of natural reason.
Faith in Jesus is as necessary for salvation as is righteousness, but Locke
hints at two provisos to this statement.
First, since Jesus’ role is simply to reveal God to people, it
would seem that faith in Jesus could easily take the form of faith in the
certainty of God’s love and mercy.
It is, for instance, not clear in The
Reasonableness of Christianity why Moses, who also performed miracles
and spoke for God, is not also a reasonable and equivalent candidate for
faith. Second, God reveals
knowledge of God to humans through Jesus in order to help those who cannot
attain such knowledge by reason alone.
However, it is not at all apparent that those who can attain
knowledge of God’s love and mercy as well as knowledge of the obligation
to live a righteous life cannot be saved through that knowledge in tandem
with righteous living. Locke
seems to leave this possibility open when he talks of the distinction
between what is true and what is necessary for salvation. (Locke, 1695: 75)
Locke’s theology is therefore an attempt to preserve Christian doctrine and revelation along side of a relatively newfound optimism about the potential of human reason to fathom the ways of God, while simultaneously advocating an empiricist notion of knowledge and experience.
(1690) An Essay Concerning Human Understanding, Vols. I and II. Oxford: Clarendon Press, 1894. Originally published: London, Eliz. Holt, first edition (1690), second edition (1694), third edition (1695), fourth edition (1700).
(1695) "The Reasonableness of Christianity," in The Reasonableness of Christianity with A Discourse of Miracles and part of A Third Letter Concerning Toleration. Ed. I. T. Ramsey. Stanford: Stanford University Press, 1958.
(1701) "A Discourse of Miracles," in The Reasonableness of Christianity with A Discourse of Miracles and part of A Third Letter Concerning Toleration. Ed. I. T. Ramsey. Stanford: Stanford University Press, 1958. Originally published: London, Charles Harper, first edition (1701), second edition (1702).
Marylu Bunting, 2000
Locke was born August 28, 1632 into a family in the lower gentry in
Somerset, England. While his father did own some land outside of Pensford
(a small town not far from Bristol), the family was of modest means and
Locke’s father had to supplement their income through the practice of
law and the fulfillment of different administrative positions. Ironically,
Locke’s prospects changed with the coming of the Civil War in which his
father served under and befriended Alexander Popham, a much more prominent
personage within the Somerset gentry. Popham was a member of parliament
representing Bath, and after the dispersion of the regiment, Locke’s
father continued in Popham’s service as an administrative aid. Via
Popham’s influence and connections, Locke gained a scholarship to the
Westminster School—an institution at the time considered one of the best
preparatory schools in England. Here Locke gained the knowledge of Latin,
Greek, and Hebrew that would serve him well in his later endeavors,
especially those in biblical hermeneutics.
In May of 1652, Locke was elected to a studentship (the equivalent of a fellowship in other colleges) at Christ Church, Oxford. The studentship could be held for life, but could be forfeited in certain circumstances, for example, if the holder became married. Locke was not hindered by this primary condition, but did have his studentship revoked when he later became embroiled in the Whig resistance to King Charles II. Locke earned his BA in February of 1656 and MA in June of 1658. For the next eight years, he was primarily employed in different posts at the college including praelector of Greek (1661-1662), praelector in rhetoric (1663), and censor of moral philosophy (1664-66).
While most Oxford graduates were headed for careers in the church, Locke turned
his attentions to medicine in the later 1650s. His notebooks from this
period contain detailed references both from lectures and independent
reading in medical science. Locke also began to read natural philosophy,
especially that of Boyle and Descartes. He followed his interest in
medicine into a collaboration with Thomas Sydenham, a practicing London
physician, on several medical tracts, including “De Arte Medica.” This
work was once thought entirely composed of Locke’s own medical notions
because the manuscript is in his hand, but is now supposed to be in the
greater part Sydenham’s (Milton, 1994, 9).
Locke’s medical interests also lead him deeper into political
interests when he gained the acquaintance of Anthony Ashley Cooper, Lord
Ashley (later to be Earl of Shaftesbury) who was in the care of another of
his medical collaborators, David Thomas. Locke and Shaftesbury upon
acquaintance in 1666 quickly became friends and Locke moved to London in
the spring of 1667, making Shaftesbury’s Exeter House his home and
becoming Shaftesbury’s medical caretaker and administrative assistant.
Shaftesbury was Chancellor of the Exchequer (the rough equivalent of the finance
minister) and a leading figure in Whig politics. He also became King
Charles II's primary foe when he and other radical Whigs attempted to block
Charles' Catholic brother James (later James II) from ascending the throne. After
Charles disbanded parliament and dissolved the replacement parliament
after one week’s service, Shaftesbury’s resistance turned from
constitutional means to the contemplation of insurrection. When their
plans became known and Shaftesbury was twice charged with treason in 1682,
he, Locke, and their associates fled to Holland, where Shaftesbury died in 1683.
By the time of his flight to Holland, most scholars believe that Locke had
already written the better portion of the Two
Treatises of Government and had begun several drafts of his Essay Concerning Human Understanding. Locke’s three year trip to
France during the mid-1670s had given him time away from government
responsibilities to more thoroughly develop his own philosophy, reading
much of Descartes and writing of philosophical matters almost daily in his
journal. Locke returned to England in 1679 just as Shaftesbury's
controversy was beginning, only a few months prior to the disbanding of
the Oxford (replacement) parliament and Shaftesbury’s subsequent turn to
plans for insurrection. For the next four years, he was caught up almost
entirely in Shaftesbury’s political world.
Once in Holland, however, Locke met with other dissenting exiles who encouraged both his political and philosophical interests, supporting him as he worked to revise and produce most of his major works including the Two Treatises, the Essay, and the first Epistola de Tolerantia. Jean Le Clerc was particularly important among these fellow dissenters. He was the religiously unorthodox publisher of the new journal Bibliothèque Universelle et Historique that eventually published a ninety-two page abridgment of Locke's Essay and made this printed excerpt available to Locke for distribution among his friends and colleagues in England and in exile, before it was otherwise available. In May 1685, Locke's association with English dissenters landed his name on a list of those to be arrested, after which Locke's studentship at Christ Church was revoked and Locke went underground in exile, living under sometimes not-so-creative pseudonyms.
After the Glorious Revolution of 1688 and the return of William of Orange as
King, Locke was able to return to England as well. He was almost
immediately offered a post as ambassador to the elector at Brandenburg. He
turned down this post, however, officially because of his poor health (he
suffered from asthma all his life), but unofficially because the proper
fulfillment of the duties of the post would require the imbibing of much
alcohol and Locke no longer took alcohol except for medicinal purposes
(Milton, 1994, 17). No longer with the assistance of his studentship and
without a desire to return to the familial holdings in Somerset, Locke
cast about for a period without a permanent residence in England. In the
first part of 1691, Locke took up permanent residence at Oates at the
invitation of Damaris Masham, his former love and long-time correspondent.
Masham was the daughter of prominent Cambridge Platonist Ralph Cudworth
through whom she and Locke had met in 1681.
1689 was a momentous year for Locke. Upon his return to England, he published
the Two Treatises in October.
The Epistola de Tolerantia also
appeared in October in William Popple’s translation from Locke’s
original Latin text. These two were published anonymously. In December,
the Essay Concerning Human Understanding appeared in Locke’s own name.
While the Two Treatises and the Essay
were for the most part favorably received, the Epistola
sparked controversy. Jonas Proast, an Oxford cleric, immediately attacked
Locke. This attack joined the two men in a correspondence that lasted into
1692 with a series of two letters from Proast and two responses from Locke
in vindication of tolerance for all Christians with the notable exclusion
of atheists and Catholics as threats respectively to the moral and political order of society.
Also in 1689, Locke made the acquaintance of Newton and the two struck up a correspondence on their primary shared interest, biblical interpretation. Newton, however, proved more interested in the book of Revelation than Locke would ever come to be. Still, the subject of their correspondence suggests Locke's growing concentration on the matter of specific Christian biblical and theological interpretation. This concentration led to the composition and publication of the last of Locke's major works, which include the Reasonableness of Christianity and the Paraphrase and Notes on the Epistles of St. Paul. The first appeared anonymously in 1695, while the second only appeared in full posthumously.
After advising the government along with Newton during the crisis of England's
silver in 1695, Locke spent the remainder of his public life in civil
service at the Board of Trade from 1696 to 1700. Here he dealt with a
broad range of issues from the colony of Virginia, to piracy on British
ships, to linen manufacture in Ireland. During this period, Locke also
undertook a defense of his Reasonableness
and Essay against the attacks of the Bishop of Worcester, Edward
Stillingfleet’s Vindication of the
Doctrine of the Trinity, which charged that these works led to
Socinianism and Unitarianism. Stillingfleet’s attack was not provoked
primarily in response to Locke's own work but in response to Toland's
1696 Christianity not Mysterious which was significantly more rationalist
than Locke’s works but which took over Locke’s theory of knowledge
from Book IV of the Essay
almost entirely. This exchange, like that with Proast before it, consisted of a
series of two exchanges between Locke and Stillingfleet, which ended with
Stillingfleet’s death, March 27, 1699. In part in response to this
exchange, however, Locke added two chapters to the 1698 fourth edition of
the Essay. These included the chapters “Of the Association of
Ideas,” and “Of Enthusiasm,” both of which deal more thoroughly with
the question of the causes of error within the operation of reason. After
1700, Locke retired almost completely to Oates and took up work in earnest
on his Paraphrase. He died
around 3:00 p.m. on October 28, 1704 as Lady Masham read the psalms to
him. During his last years he was, with Newton, one of England's as well
as Europe's most famous intellectuals.
As his biography shows, Locke's interests were broad and far-reaching.
During his life he published in many areas including medicine, economics,
education, government, religious toleration, Christian doctrine, and
biblical interpretation. His major influence, however, was primarily in
government with the Two Treatises of
Government; in religious toleration with the Epistola
de Tolerantia and its three later sibling letters; in philosophical
epistemology with the Essay
Concerning Human Understanding; and in Christian theology and
hermeneutics with the Reasonableness
of Christianity and the Paraphrase
and Notes on the Epistles of St. Paul.
In the Two Treatises, of which the
second is the more influential, Locke argued against patriarchal monarchy
that humans (or man, as Locke wrote) are free and equal in a state of
nature. By nature, no one is necessarily sovereign over anyone else.
Because of the deficiencies of the natural state—such as the human
tendency to harm others and their property, as well as
to unjustly punish each other when such harm occurs—individuals
contract with each other to form civil society and with it government. The
end of government is to secure the natural rights of its citizens and to
judge those who transgress these rights. This conception of government
contains within it the justification of revolt, for if the government does
not live up to its contractual obligations of protection and judicious
punishment of offenses, then the individuals making up society have the
right to dissolve the government in order to form one that will live up to
its contractual obligations. While many initially believed the
Two Treatises to have been written with the express purpose of
supporting the Glorious Revolution, most scholars now hold that they were
already in nearly their present form prior to Locke’s exile in Holland
and thus underwent only minor revisions before their appearance in 1689
after the victory of William of Orange.
In the Epistola de Tolerantia, Locke
undertook an analysis of the nature of belief and argued from this
analysis that toleration is appropriate in matters of belief that neither
undermine the moral quality of society nor the authority of the duly
constituted government. Belief, he said, is a matter which can neither be
commanded nor submitted to the authority of the government, whose
responsibility extends only to the preservation of rights including the
right to property and not to the salvation of individuals. Individuals
must judge in the power of their reason what beliefs will be controlling
for their lives. Truth gives its own recommendation. Atheists and
Catholics were, however, excluded from Locke’s extension of toleration.
Atheists, Locke held, by denying God, deny the foundation of morality and
are thus a threat to civil society. Catholics by accepting the pope as the
only supreme authority undermine the authority of the government and thus
cannot be extended toleration in Locke’s view. The immediate context of
Locke’s composition of the Epistola
was the revocation of the Edict of Nantes in October 1685 which had
extended limited toleration to non-Catholics in France.
The Essay Concerning Human Understanding took shape over nearly two decades of
his life beginning sometime during his studentship at Christ Church and
undergoing numerous revisions and transformations until its publication in
1689 and again with the publication of the second edition in 1694 and the
fourth in 1698. The third edition contained no major revisions. Locke was
trained, as all university pupils then were, in the medieval scholastic
tradition of disputation which demanded multi-layered, sophisticated
textual analysis of the historical philosophical and theological
tradition. Locke and his contemporaries came to feel however that this
method of inter- and intra-textual criticism was not productive in the
pursuit of useful knowledge, but rather left one repeating the mistakes
and misjudgments of the past. In an apocryphal conversation at Earl
Shafetsbury’s in 1670, Locke found he and his colleagues embroiled in
seemingly endless difficulties and began to think that what was needed was
not so much the assertion of truths understood, but an analysis into the
process of understanding itself including especially its capacity and
limits (Locke, 1689a, The Epistle to the Reader).
The Essay which finally appeared
undertook this task within the context of an argument against Cartesian
and Platonist innate ideas and an argument for the proposal that all
knowledge is gained from experience. Locke included within experience both
experience of the world via the senses and reflective experience of the
operations of the understanding itself which he called “internal
sense.” He held that humans have access only to their own ideas of the
world—which ideas he defined as “whatsoever is the object
of the understanding” (I. i. 8). Ideas come in both simple and complex
forms. Simple ideas come to humans passively via the senses and are
indivisible. Complex ideas are mainly produced actively by humans’
reflective activity and involve the coordination, relation, and
combination of more than one simple idea. Moreover, our ideas are of both
primary and secondary qualities of bodies. Primary qualities are those
which really exist in the body, such as solidity, extension, figure,
motion and number. Secondary qualities are those which are the powers of
the primary qualities to affect the human senses, such as color, taste,
and sound. Locke held that these are not really existent in the bodies but
are a result of the human capacities to affectively perceive them (II. i.).
Our knowledge of the world thus comes in various degrees of certainty, moving
from intuitive knowledge which immediately discriminates likeness and
dissimilarity of ideas to demonstrative knowledge which requires reasoned
thought in the form of argumentative proofs to discriminate its probable
veracity. All certainty depends on intuitive knowledge (IV. ii. 1.), while
we can only have lesser and greater degrees of probable knowledge from
demonstrative knowledge. Demonstrative knowledge can always be thrown into
suspicion by subsequent demonstration, whereas intuitive knowledge is
immediately perceived (IV. ii. 1-2).
Locke wrote, “Intuition and demonstration are two degrees of
knowledge—whatsoever comes short of one of these, with what assurance
soever embraced, is but faith or
opinion, but not knowledge...”
(IV. ii. 14). While later interpreters would see Locke’s epistemology in
his Essay as undermining the
grounds of faith, Locke himself saw it as a support to faith since it
delimited the capacities of knowledge and gave clarity as regards what can
indeed be known. If certain matters cannot be known with certainty in
Locke’s sense, they can still be believed and this belief can itself be
grounded, according to Locke, in good reasoning to the extent that is
possible. Unlike later users of his epistemology, Locke held that God was
among those realities about which one could have knowledge. In fact, after
the intuitive certainty of one’s own existence, Locke held that one
could be most certain of God’s existence via the demonstration from
finite to infinite causes, and the demonstration from knowledgeable and
moral creatures to a most knowing and moral being (IV. x. 1-6). Locke’s
epistemology in the Essay also
left room for revelation as knowledge which can be confirmed by reason. He
wrote of revelation as “natural reason enlarged by a new set of
discoveries communicated by God immediately, which reason vouches the
truth of by the testimony and proofs it gives that they come from God”
(IV. xix. 4).
Locke's estimation of the value of revelation "vouched" for by reason is clear
in both the Reasonableness of
Christianity and the Paraphrase
and Notes on the Epistles of St. Paul. In the former he argues that
the Christian account of salvation through Christ is both reasonable and
necessary given the fall of Adam. He holds that humanity lost immortality
through Adam’s fall and that immortality which is life is restored
through Jesus Christ in the form of the resurrection promised to Christian
believers. Opening the way for considerable difference of opinion within
Christian doctrine, he argued that the only thing which Jesus and thereby
God requires for justification is the firm belief in Jesus Christ as the
Messiah. Attendant to this belief will be sincere repentance and moral
reform in keeping with the taking of Jesus as Lord, but in specifics all
that is required is this firm belief in the Messiahship of Christ. Locke
traces the theme of Jesus Christ as the Messiah through the four Gospels
and Acts, arguing that this is the sole proclamation of Jesus himself, of
the disciples, and of the gospel writers. Locke’s Christ returns the
potential for immortality to humanity and again demonstrates the proper
morality of natural law for humanity, especially the poor and unlearned
who cannot reason out the natural law for themselves due to the extremity
and necessities of their circumstances.
In the later of his religious works, the Paraphrase
and Notes on the Epistles of St. Paul, Locke argues that one must
understand each of the statements of Paul within the context of the letter
in which it occurs. He notes what he took to be the difference between the
Epistles as occasional letters provoked by specific situations occurring
within maturing Christian communities and the gospels as universal
declarations meant to teach new Christians and spread the proclamation of
the Messiahship of Jesus Christ to non-believers. The material in the
Epistles is therefore not as binding as that in the gospels but is still,
on Locke’s view, potentially very important for contemporary Christians
as they attempt to lead the moral life to which Jesus calls them.
Epistola de Tolerantia (Gouda); tr. as Letter on Toleration, by William Popple (London); 2nd ed. (1690).
Two Treatises of Government (London); 2nd ed. (1694); 3rd ed. (1698).
An Essay Concerning Human Understanding (London); 2nd ed. (1694); 3rd ed. (1695); 4th ed. (1700); 5th ed. (1706).
The Reasonableness of Christianity, As Delivered in the Scriptures (London); 2nd ed. (1695).
A Discourse on Miracles.
A Paraphrase and Notes on the Epistles of St. Paul. 6 vols. (London).
Secondary Sources
Ayers, M. Locke. London: Routledge, 1991.
Ayers, M. "Locke, John," in Routledge Encyclopedia of Philosophy, 1st ed. 1998.
Chappell, V., ed. The Cambridge Companion to Locke. Cambridge: Cambridge University Press, 1994.
Cranston, M. John Locke: A Biography. London: Longmans, 1957.
Harris, I. The Mind of John Locke. Cambridge: Cambridge University Press, 1994.
Marshall, J. John Locke: Resistance, Religion, and Responsibility. Cambridge: Cambridge University Press, 1994.
Milton, J.R. "John Locke's Life and Times." In The Cambridge Companion to Locke, ed. V. Chappell, 5-25. Cambridge: Cambridge University Press, 1994.
Yolton, J.S. and Yolton, J.W. John Locke: A Reference Guide. Boston, MA: G.K. Hall, 1985.
Yolton, J.W. John Locke and the Way of Ideas. Oxford: Oxford University Press, 1956.
Slavica Jakelic, 1998
British philosopher of empiricism. His writings range across epistemology, political and moral philosophy, and education.
Locke was born in Wrington, Somerset, into a Puritan family. Due to his father's involvement in the struggle between Parliament and Charles I, Locke became acquainted very early with the political turmoil in England. He witnessed the most dramatic moments in English history: the execution of Charles I and the rule of Cromwell. He welcomed the Restoration and then contested the reigns of Charles II and the Catholic James II. In 1688, Locke advocated the Glorious Revolution that restored the Protestant constitutional monarchy.
At Westminster School, Locke spent six years learning classical languages. He continued his education at Oxford's prestigious Christ Church College. After graduation in 1656, he went on to his master's degree and became a Tutor and Censor of moral philosophy.
Locke was always dissatisfied with the scholastic teaching methods and contents, and this would be reflected in his writings on education. At Oxford, however, he had an opportunity to study the new sciences, as well as experimental and empirical methods. This was decisive for his choice of medical studies as opposed to a clerical career.
In 1662, Locke met Lord Ashley, who was to become the Earl of Shaftesbury and Locke's close friend. In 1668, Locke accepted the position of the Earl's personal physician. By entering Shaftesbury's home, Locke found himself in the epicenter of political and intellectual life in England. He was soon elected to the Royal Society of London, which was formed in the 1660s as an anti-traditionalist institution for the Improving of Natural Knowledge (Woolhouse 1988, 68).
Shaftesbury's political and economic activities directly affected Locke's political ideas and personal life. When Shaftesbury opposed Charles II's moves to allow a Roman Catholic succession to the British throne, he was forced to flee England. Fearing for his life, Locke followed Shaftesbury to Holland and lived there in exile from 1683 to 1689. These years were the beginning of the most prolific period in Locke's career.
Locke's philosophical thinking was influenced by a group known as the Cambridge Platonists. Locke incorporated their idea of reason as a natural light given by God into his Essay Concerning Human Understanding. In his earlier work, Essays on the Law of Nature (1664), Locke wanted to reconcile the ideas of the Cambridge Platonists with Thomas Hobbes's sensationalism. According to Locke's private correspondence, he was largely indebted for his philosophical thoughts to two French philosophers, René Descartes and Pierre Gassendi. The latter encouraged Locke's questioning of innate ideas.
Locke's Main Works
Locke's principal works were An Essay Concerning Human Understanding (1689/1690); Two Treatises of Government (1690); Letters Concerning Toleration (1689-92, First Letter Concerning Toleration was published anonymously as Epistola de Tolerantia in Holland); Some Thoughts Concerning Education (1693); The Reasonableness of Christianity (1695).
Locke's philosophical ideas are best presented in his Essay Concerning Human Understanding. He started working on the essay as a response to group discussions about the "principles of morality and revealed religion" (Locke 1969, 4). Locke had written that in these discussions he came to the conclusion that before starting any inquiry into complex matters, we must "examine our own abilities, and see what objects our understandings were or were not fitted to deal with" (4).
The essay attempts to answer the questions of the origin, extent, and certainty of human knowledge. In the first book, Locke repudiates the doctrine of innate principles. He protests against the assertion that human beings are born with certain ideas in their souls. He then declares that all our knowledge is founded in, and ultimately derives from, experience. This claim marked Locke as the founder of British Empiricism.
However, Locke's empiricism did not identify sensations with knowledge. In the Essay, Locke only frames his discussion with the idea that our sensations are the origin of our knowledge, as well as the source of its certainty. He says that sensations are what "convey into the mind several distinct perceptions of things" (43). But, besides the perception of external things, Locke speaks of the second source of our ideas, that is, the operations of our mind: "thinking, doubting, believing, reasoning, knowing, willing" (44).
Locke further emphasizes that the ideas of our mind are not yet the content, but merely the materials, of knowledge. Ideas are defined as "whatever it is which the mind can be employed in thinking" (16). If the mind receives ideas directly from our sensations, we speak of simple ideas. If the mind generates ideas, then we speak of general ideas, or "abstract ideas" (229). The more general ideas are, the more incomplete and partial they are.
In the process of receiving and generating particular and general ideas respectively, our mind produces the knowledge of the qualities of things. These qualities of things are primary if they mechanically manifest themselves as existing in the body of things (original qualities such as form, or motion). The qualities of things are secondary if we denominate them from the things (for instance: sweet, blue, warm). Evidently, Locke establishes a correlation between simple ideas and primary qualities of things, and general ideas and secondary qualities of things. The first correlation is directly produced by senses; the second is created in our mind.
Locke argues that the effect of senses and the operations of our mind lead our mind to perceive "the connexion and agreement, or disagreement of any of our ideas" (255). This process in which our mind establishes some form of connection among ideas is what Locke defines as knowledge.
Locke says that our mind establishes different kinds of relations and, accordingly, produces three different degrees of knowledge. Sensitive knowledge is based on simple ideas and produced by sensations. Demonstrative and intuitive knowledge is achieved when the mind establishes connections among ideas. Intuitive knowledge is knowledge of our own existence; demonstrative knowledge, of God's existence.
Although Locke declared that our knowledge originates in the senses, he never ignored the role of reason and reasoning in the process of knowing. He did state that knowledge is possible and that it originates in experience. With the first statement, Locke answered skepticism; with the second, he rejected the rationalist argument that reason without experience can be the foundation of knowledge. In other words, Locke never asserted that the sensory origins of knowledge guarantee its absolute certainty. On the contrary, he implied that our mind has the freedom to establish connections among ideas and to err. Finally, Locke said that any general knowledge must begin from particular ideas. His empiricism, therefore, was built up of many building blocks, for which the sensations were only the necessary foundation.
Locke's philosophical thought determined the development of British philosophy toward radical empiricism. George Berkeley and David Hume both followed Locke's ideas but also tried to remove from them everything that was not directly founded on sensations. This approach led them into skepticism about the things that we accept as common knowledge.
Immanuel Kant's philosophy attempted to bridge the gap between Locke's empiricism and Descartes's rationalism, between experience and a priori knowledge. However, Locke's "new way of ideas" has never ceased to be an intellectual challenge for Western philosophers.
Locke wrote his Two Treatises of Government in order to "establish the throne of our great restorer, our present king William" (Lamprecht 1928, xxxvii). The first treatise was written as a document against Robert Filmer's Patriarcha. Filmer argues that state and government have a divine right to rule over people. The character of the state's authority is paternal and a prolongation of Adam's absolute authority over Eve.
Locke answers by making a distinction between public and private authority. He says that the state has the right to intervene in the public sphere, but only if it recognizes the autonomy of individuals. The state is for Locke a public and legal institution.
The second treatise is Locke's defense of constitutional government. Locke here speaks about the state of nature as a pre-political condition, in which people had a natural right to life, freedom and private property. However, the law of nature was not always respected because human beings are not necessarily guided by reason. In order to protect their natural rights, people associate on the basis of a social contract into a political society. Locke's idea of the social contract gives the ultimate power to the community of people, not to the government. People have no right to rebel against the government, but they do have a right to defend the principles of the social contract against tyranny.
Ideas about Religion
In the Epistola de Tolerantia, Locke makes a distinction between the spheres of state and church. He defines the state as a secular tool for securing the civil interests: "life, liberty, health, and the possession of outward things" (Locke 1965, 108). The care of souls belongs to the church -- "a voluntary society of men, joining themselves together in order to the public worshiping of God" (111). No church has the right to impose its beliefs or rituals on anyone who does not voluntarily accept them. On the other hand, the state needs to provide the freedom of worship for all religious societies equally. Yet, the state should not tolerate a church that is destructive to the whole society. Locke used this argument in the time of the struggle between Shaftesbury and Charles II. He stated that Roman Catholics could not be tolerated "because their opinions were destructive to all governments except the Pope's" (Cranston 1965, 8).
The three subsequent letters on toleration were Locke's reply to Jonas Proast's criticisms. Proast claimed that civil authorities could punish people if they refused to accept the Christian religion.
Locke's discussion in the Reasonableness of Christianity is usually not considered very representative of his philosophical ideas. The main purpose of this work is to show that Christianity is not a fanatical or purely emotional tradition, but that its truths and moral norms are in accordance with the law of nature. Locke tries to rationally present the character of Christian teaching; he even situates Christianity in its historical context. Ultimately, Locke does differentiate between reason and faith, and argues that human reason cannot fulfill the commands of Christianity without the truth of revelation. The philosophical basis for these ideas is in Locke's distinction between propositions that are according to reason (the existence of God), contrary to reason (the existence of more than one God), and above reason (the resurrection of the dead). The first type of proposition we get from sensations and reflection; the second type is contrary to the ideas produced by sensations and reflection. Those propositions that our reason cannot deduce from our sensations or reflections are above reason, and their source is revelation.
Ideas about Education
In his Thoughts on Education, which was published as a compilation of his personal letters, Locke asserts that the main purpose of education should not be the formation of scholars but the "formation of the whole man" (Caponigri 1963, 315). Locke advocates respect for individuals' talents, and demands physical exercise and open air. He believes that education needs to develop "a sound mind in the sound body," which clearly follows some of the ideals of Greek education described in Plato's Republic (Caponigri 1963, 315). We can understand the uniqueness of Locke's ideas about education only if we acknowledge that he was the first to articulate them. Locke's Thoughts on Education made a tremendous impact on the later understanding of educational methods and contents.
In England and France, Locke's Essay Concerning Human Understanding became "the philosophical Bible" (Pringle-Pattison 1969, X). Yet, Locke's fame was never an obstacle to his intellectual modesty. He was known as a pious, pragmatic and witty person. He strongly believed that human understanding should be directed toward what is important for human conduct. Since Locke as a philosopher never became detached from practical life, his political writings both theoretically articulated and reflected his direct political engagement. His ideas about education resulted from his practical involvement in the educational process.
Locke's strong devotion to the Christian faith and his loyalty to the philosophical quest for truth framed his discussions of religion and tolerance. Locke died in Essex in 1704. He never married.
Works Cited: Primary Sources
John Locke. 1969 [1689/1690]. An Essay Concerning Human Understanding. Oxford: Clarendon Press.
John Locke. 1996 [1695]. The Reasonableness of Christianity. Stanford: Stanford University Press.
John Locke. 1965 [1689-1692]. "A Letter Concerning Toleration" in Locke on Politics, Religion and Education. New York: Collier Books.
John Locke. 1965 [16??]. "The Second Treatise on Civil Government," in Locke on Politics, Religion and Education. New York: Collier Books.
Works Cited: Other Books
Caponigri, A. Robert. 1963. A History of Western Philosophy. Notre Dame: University of Notre Dame Press.
Cranston, Maurice. 1965. "Introduction" to Locke on Politics, Religion and Education. New York: Collier Books.
Hamilton, D.W. 1967. "Empiricism" in Mircea Eliade, ed., Encyclopedia of Philosophy, Vol. 2. New York: The Macmillan Company.
Lamprecht, Sterling P. 1928. Locke Selections. New York/Chicago/Boston: Charles Scribner's Sons.
Pringle-Pattison, A.S. 1969. "Preface" to An Essay Concerning Human Understanding. Oxford: Clarendon Press.
Woolhouse, R.S. 1988. The Empiricists. Oxford/New York: Oxford University Press.
| http://people.bu.edu/wwildman/bce/mwt_themes_440_locke.htm | 13
16 | The frontispiece of Sir Henry Billingsley's first English version of Euclid's Elements, 1570
Author(s): Euclid, and translators
Language: Ancient Greek, translations
Subject(s): Euclidean geometry, elementary number theory
Pages: 13 books, or more in translation with scholia
Euclid's Elements (Ancient Greek: Στοιχεῖα Stoicheia) is a mathematical and geometric treatise consisting of 13 books written by the ancient Greek mathematician Euclid in Alexandria c. 300 BC. It is a collection of definitions, postulates (axioms), propositions (theorems and constructions), and mathematical proofs of the propositions. The thirteen books cover Euclidean geometry and the ancient Greek version of elementary number theory. The work also includes an algebraic system that has become known as geometric algebra, which is powerful enough to solve many algebraic problems, including the problem of finding the square root of a number. With the exception of Autolycus' On the Moving Sphere, the Elements is one of the oldest extant Greek mathematical treatises, and it is the oldest extant axiomatic deductive treatment of mathematics. It has proven instrumental in the development of logic and modern science. The name 'Elements' comes from the plural of 'element'. According to Proclus the term was used to describe a theorem that is all-pervading and helps furnish proofs of many other theorems. In Greek, the word for 'element' is the same as the word for 'letter'. This suggests that theorems in the Elements should be seen as standing in the same relation to geometry as letters to language. Later commentators give a slightly different meaning to the term 'element', emphasizing how the propositions have progressed in small steps, and continued to build on previous propositions in a well-defined order.
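The square-root remark above refers to the fact that Elements II.14 (restated with proportions in VI.13) squares a given rectangle by taking a mean proportional: lay the rectangle's two side lengths end to end as the diameter of a semicircle, and the perpendicular raised at the join meets the arc at a height equal to the side of the equal square. The following Python sketch is only my numerical illustration of that fact, not Euclid's own procedure, and the function name is mine.

    import math

    def mean_proportional(p: float, q: float) -> float:
        """Half-chord raised where a diameter of length p + q is divided into
        segments p and q; by Elements II.14 / VI.13 this equals sqrt(p * q)."""
        r = (p + q) / 2.0    # radius of the semicircle
        d = r - p            # distance from the centre to the dividing point
        return math.sqrt(r * r - d * d)

    # A rectangle with sides 1 and n is squared by a side of length sqrt(n).
    for n in (2, 3, 10):
        print(n, mean_proportional(1, n), math.sqrt(n))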
Euclid's Elements has been referred to as the most successful and influential textbook ever written. Being first set in type in Venice in 1482, it is one of the very earliest mathematical works to be printed after the invention of the printing press and was estimated by Carl Benjamin Boyer to be second only to the Bible in the number of editions published, with the number reaching well over one thousand. For centuries, when the quadrivium was included in the curriculum of all university students, knowledge of at least part of Euclid's Elements was required of all students. Not until the 20th century, by which time its content was universally taught through other school textbooks, did it cease to be considered something all educated people had read.
Basis in earlier work
Scholars believe that the Elements is largely a collection of theorems proven by other mathematicians, supplemented by some original work. Proclus, a Greek mathematician who lived several centuries after Euclid, wrote in his commentary of the Elements: "Euclid, who put together the Elements, collecting many of Eudoxus' theorems, perfecting many of Theaetetus', and also bringing to irrefragable demonstration the things which were only somewhat loosely proved by his predecessors". Pythagoras was probably the source for most of books I and II, Hippocrates of Chios (not the better known Hippocrates of Kos) for book III, and Eudoxus for book V, while books IV, VI, XI, and XII probably came from other Pythagorean or Athenian mathematicians. Euclid often replaced fallacious proofs with his own, more rigorous versions. The use of definitions, and postulates or axioms dated back to Plato, almost a century earlier. The Elements may have been based on an earlier textbook by Hippocrates of Chios, who also may have originated the use of letters to refer to figures.
Transmission of the text
In the fourth century AD, Theon of Alexandria produced an edition of Euclid which was so widely used that it became the only surviving source until François Peyrard's 1808 discovery at the Vatican of a manuscript not derived from Theon's. This manuscript, the Heiberg manuscript, is from a Byzantine workshop c. 900 and is the basis of modern editions. Papyrus Oxyrhynchus 29 is a tiny fragment of an even older manuscript, but only contains the statement of one proposition.
Although known to, for instance, Cicero, there is no extant record of the text having been translated into Latin prior to Boethius in the fifth or sixth century. The Arabs received the Elements from the Byzantines in approximately 760; this version, by a pupil of Euclid called Proclo, was translated into Arabic under Harun al Rashid c. 800. The Byzantine scholar Arethas commissioned the copying of one of the extant Greek manuscripts of Euclid in the late ninth century. Although known in Byzantium, the Elements was lost to Western Europe until c. 1120, when the English monk Adelard of Bath translated it into Latin from an Arabic translation.
The first printed edition appeared in 1482 (based on Campanus of Novara's 1260 edition), and since then it has been translated into many languages and published in about a thousand different editions. Theon's Greek edition was recovered in 1533. In 1570, John Dee provided a widely respected "Mathematical Preface", along with copious notes and supplementary material, to the first English edition by Henry Billingsley.
Copies of the Greek text still exist, some of which can be found in the Vatican Library and the Bodleian Library in Oxford. The manuscripts available are of variable quality, and invariably incomplete. By careful analysis of the translations and originals, hypotheses have been made about the contents of the original text (copies of which are no longer available).
Ancient texts which refer to the Elements itself, and to other mathematical theories that were current at the time it was written, are also important in this process. Such analyses are conducted by J. L. Heiberg and Sir Thomas Little Heath in their editions of the text.
Also of importance are the scholia, or annotations to the text. These additions, which often distinguished themselves from the main text (depending on the manuscript), gradually accumulated over time as opinions varied upon what was worthy of explanation or further study.
The Elements is still considered a masterpiece in the application of logic to mathematics. In historical context, it has proven enormously influential in many areas of science. Scientists Nicolaus Copernicus, Johannes Kepler, Galileo Galilei, and Sir Isaac Newton were all influenced by the Elements, and applied their knowledge of it to their work. Mathematicians and philosophers, such as Bertrand Russell, Alfred North Whitehead, and Baruch Spinoza, have attempted to create their own foundational "Elements" for their respective disciplines, by adopting the axiomatized deductive structures that Euclid's work introduced.
The austere beauty of Euclidean geometry has been seen by many in western culture as a glimpse of an otherworldly system of perfection and certainty. Abraham Lincoln kept a copy of Euclid in his saddlebag, and studied it late at night by lamplight; he related that he said to himself, "You never can make a lawyer if you do not understand what demonstrate means; and I left my situation in Springfield, went home to my father's house, and stayed there till I could give any proposition in the six books of Euclid at sight". Edna St. Vincent Millay wrote in her sonnet Euclid Alone Has Looked on Beauty Bare, "O blinding hour, O holy, terrible day, When first the shaft into his vision shone Of light anatomized!". Einstein recalled a copy of the Elements and a magnetic compass as two gifts that had a great influence on him as a boy, referring to the Euclid as the "holy little geometry book".
The success of the Elements is due primarily to its logical presentation of most of the mathematical knowledge available to Euclid. Much of the material is not original to him, although many of the proofs are his. However, Euclid's systematic development of his subject, from a small set of axioms to deep results, and the consistency of his approach throughout the Elements, encouraged its use as a textbook for about 2,000 years. The Elements still influences modern geometry books. Further, its logical axiomatic approach and rigorous proofs remain the cornerstone of mathematics.
Outline of Elements
Contents of the books
Books 1 through 4 deal with plane geometry:
- Book 1 contains Euclid's 10 axioms (5 named postulates—including the parallel postulate—and 5 named axioms) and the basic propositions of geometry: the pons asinorum (proposition 5), the Pythagorean theorem (Proposition 47), equality of angles and areas, parallelism, the sum of the angles in a triangle, and the three cases in which triangles are "equal" (have the same area).
- Book 2 is commonly called the "book of geometric algebra" because most of the propositions can be seen as geometric interpretations of algebraic identities, such as a(b + c + ...) = ab + ac + ... or (2a + b)² + b² = 2(a² + (a + b)²). It also contains a method of finding the square root of a given number (spot-checked in the sketch following this list).
- Book 3 deals with circles and their properties: inscribed angles, tangents, the power of a point, Thales' theorem.
- Book 4 constructs the incircle and circumcircle of a triangle, and constructs regular polygons with 4, 5, 6, and 15 sides.
- Book 5 is a treatise on proportions of magnitudes. Proposition 25 has as a special case the inequality of arithmetic and geometric means.
- Book 6 applies proportions to geometry: Similar figures.
- Book 7 deals strictly with elementary number theory: divisibility, prime numbers, Euclid's algorithm for finding the greatest common divisor (illustrated in the sketch following this list), least common multiple. Propositions 30 and 32 together are essentially equivalent to the fundamental theorem of arithmetic stating that every positive integer can be written as a product of primes in an essentially unique way, though Euclid would have had trouble stating it in this modern form as he did not use the product of more than 3 numbers.
- Book 8 deals with proportions in number theory and geometric sequences.
- Book 9 applies the results of the preceding two books and gives the infinitude of prime numbers (proposition 20), the sum of a geometric series (proposition 35), and the construction of even perfect numbers (proposition 36); the first few are generated in the sketch following this list.
- Book 10 attempts to classify incommensurable (in modern language, irrational) magnitudes by using the method of exhaustion, a precursor to integration.
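Several of the arithmetical results listed above are easy to spot-check computationally. The short Python sketch below is my own illustration (nothing of the kind appears in the Elements or its editions): it verifies the Book 2 identity numerically, runs the subtraction form of Euclid's algorithm from Book 7 (Propositions 1-2), and generates the even perfect numbers of Book 9, Proposition 36; helper names such as euclid_gcd and is_prime are mine.

    from random import randint

    # Book 2: spot-check the identity (2a + b)^2 + b^2 = 2(a^2 + (a + b)^2).
    for _ in range(5):
        a, b = randint(1, 100), randint(1, 100)
        assert (2*a + b)**2 + b**2 == 2*(a**2 + (a + b)**2)

    # Book 7 (Propositions 1-2): greatest common divisor found by repeatedly
    # taking the lesser number away from the greater (anthyphairesis).
    def euclid_gcd(m: int, n: int) -> int:
        while m != n:
            if m > n:
                m -= n
            else:
                n -= m
        return m

    assert euclid_gcd(1071, 462) == 21

    # Book 9, Proposition 36: if 2^p - 1 is prime, then 2^(p-1) * (2^p - 1)
    # equals the sum of its proper divisors, i.e. it is a perfect number.
    def is_prime(k: int) -> bool:
        return k > 1 and all(k % d for d in range(2, int(k**0.5) + 1))

    for p in range(2, 8):
        q = 2**p - 1
        if is_prime(q):
            n = 2**(p - 1) * q
            assert n == sum(d for d in range(1, n) if n % d == 0)
            print(n)   # 6, 28, 496, 8128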
Books 11 through 13 deal with spatial geometry:
- Book 11 generalizes the results of Books 1–6 to space: perpendicularity, parallelism, volumes of parallelepipeds.
- Book 12 studies volumes of cones, pyramids, and cylinders in detail, and shows for example that the volume of a cone is a third of the volume of the corresponding cylinder (a numerical check in this spirit follows this list). It concludes by showing the volume of a sphere is proportional to the cube of its radius by approximating it by a union of many pyramids.
- Book 13 constructs the five regular Platonic solids inscribed in a sphere, calculates the ratio of their edges to the radius of the sphere, and proves that there are no further regular solids.
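The 1:3 cone-to-cylinder ratio cited for Book 12 above can be checked numerically in a spirit loosely analogous to the method of exhaustion, by filling the cone with ever thinner cylindrical slabs. This Python sketch is purely my illustration; Euclid's argument, of course, uses neither coordinates nor limits.

    import math

    def cone_volume_by_slabs(r: float, h: float, slabs: int = 100000) -> float:
        """Approximate the volume of a cone of base radius r and height h by a
        stack of thin cylindrical slabs."""
        dz = h / slabs
        total = 0.0
        for i in range(slabs):
            z = (i + 0.5) * dz           # mid-height of the current slab
            radius = r * (1 - z / h)     # the cone tapers linearly to its apex
            total += math.pi * radius**2 * dz
        return total

    r, h = 2.0, 5.0
    cylinder = math.pi * r**2 * h
    print(cone_volume_by_slabs(r, h) / cylinder)   # approaches 1/3 as slabs grows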
Euclid's method and style of presentation
As was common in ancient mathematical texts, when a proposition needed proof in several different cases, Euclid often proved only one of them (often the most difficult), leaving the others to the reader. Later editors such as Theon often interpolated their own proofs of these cases.
Euclid's presentation was limited by the mathematical ideas and notations in common currency in his era, and this causes the treatment to seem awkward to the modern reader in some places. For example, there was no notion of an angle greater than two right angles, the number 1 was sometimes treated separately from other positive integers, and as multiplication was treated geometrically he did not use the product of more than 3 different numbers. The geometrical treatment of number theory may have been because the alternative would have been the extremely awkward Alexandrian system of numerals.
The presentation of each result is given in a stylized form, which, although not invented by Euclid, is recognized as typically classical. It has six different parts: First is the enunciation which states the result in general terms (i.e. the statement of the proposition). Then the setting-out, which gives the figure and denotes particular geometrical objects by letters. Next comes the definition or specification which restates the enunciation in terms of the particular figure. Then the construction or machinery follows. It is here that the original figure is extended to forward the proof. Then, the proof itself follows. Finally, the conclusion connects the proof to the enunciation by stating the specific conclusions drawn in the proof, in the general terms of the enunciation.
No indication is given of the method of reasoning that led to the result, although the Data does provide instruction about how to approach the types of problems encountered in the first four books of the Elements. Some scholars have tried to find fault in Euclid's use of figures in his proofs, accusing him of writing proofs that depended on the specific figures drawn rather than the general underlying logic, especially concerning Proposition II of Book I. However, Euclid's original proof of this proposition is general, valid, and does not depend on the figure used as an example to illustrate one given configuration.
Euclid's list of axioms in the Elements was not exhaustive, but represented the principles that were the most important. His proofs often invoke axiomatic notions which were not originally presented in his list of axioms. Later editors have interpolated Euclid's implicit axiomatic assumptions in the list of formal axioms.
For example, in the first construction of Book 1, Euclid used a premise that was neither postulated nor proved: that two circles with centers at the distance of their radius will intersect in two points. Later, in the fourth construction, he used superposition (moving the triangles on top of each other) to prove that if two sides and their angles are equal then they are congruent; during these considerations he uses some properties of superposition, but these properties are not described explicitly in the treatise. If superposition is to be considered a valid method of geometric proof, all of geometry would be full of such proofs. For example, propositions I.1 – I.3 can be proved trivially by using superposition.
Mathematician and historian W. W. Rouse Ball put the criticisms in perspective, remarking that "the fact that for two thousand years [the Elements] was the usual text-book on the subject raises a strong presumption that it is not unsuitable for that purpose."
It was not uncommon in ancient times to attribute to celebrated authors works that were not written by them. It is by these means that the apocryphal books XIV and XV of the Elements were sometimes included in the collection. The spurious Book XIV was probably written by Hypsicles on the basis of a treatise by Apollonius. The book continues Euclid's comparison of regular solids inscribed in spheres, with the chief result being that the ratio of the surfaces of the dodecahedron and icosahedron inscribed in the same sphere is the same as the ratio of their volumes, the ratio being √(10/(3(5 - √5))) ≈ 1.098.
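A quick numerical check of this ratio, using the standard modern formulas for the two solids rather than Hypsicles' reasoning (so purely my own illustration), shows the surface and volume ratios agreeing with the closed form quoted above:

    import math

    s5 = math.sqrt(5)

    def dodecahedron(R: float):
        # Surface area and volume of a regular dodecahedron with circumradius R.
        a = 4 * R / (math.sqrt(3) * (1 + s5))    # edge length
        return 3 * math.sqrt(25 + 10 * s5) * a**2, (15 + 7 * s5) / 4 * a**3

    def icosahedron(R: float):
        # Surface area and volume of a regular icosahedron with circumradius R.
        a = 4 * R / math.sqrt(10 + 2 * s5)       # edge length
        return 5 * math.sqrt(3) * a**2, 5 * (3 + s5) / 12 * a**3

    Sd, Vd = dodecahedron(1.0)
    Si, Vi = icosahedron(1.0)
    print(Sd / Si, Vd / Vi, math.sqrt(10 / (3 * (5 - s5))))   # all three ≈ 1.098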
The spurious Book XV was probably written, at least in part, by Isidore of Miletus. This book covers topics such as counting the number of edges and solid angles in the regular solids, and finding the measure of dihedral angles of faces that meet at an edge.
Editions
- 1460s, Regiomontanus (incomplete)
- 1482, Erhard Ratdolt (Venice), first printed edition
- 1533, editio princeps by Simon Grynäus
- 1557, by Jean Magnien and Pierre de Montdoré, reviewed by Stephanus Gracilis (only propositions, no full proofs, includes original Greek and the Latin translation)
- 1572, Commandinus Latin edition
- 1574, Christoph Clavius
Translations
- 1505, Bartolomeo Zamberti (Latin)
- 1543, Niccolò Tartaglia (Italian)
- 1557, Jean Magnien and Pierre de Montdoré, reviewed by Stephanus Gracilis (Greek to Latin)
- 1558, Johann Scheubel (German)
- 1562, Jacob Kündig (German)
- 1562, Wilhelm Holtzmann (German)
- 1564–1566, Pierre Forcadel de Béziers (French)
- 1570, Henry Billingsley (English)
- 1575, Commandinus (Italian)
- 1576, Rodrigo de Zamorano (Spanish)
- 1594, Typografia Medicea (edition of the Arabic translation of Nasir al-Din al-Tusi)
- 1604, Jean Errard de Bar-le-Duc (French)
- 1606, Jan Pieterszoon Dou (Dutch)
- 1607, Matteo Ricci, Xu Guangqi (Chinese)
- 1613, Pietro Cataldi (Italian)
- 1615, Denis Henrion (French)
- 1617, Frans van Schooten (Dutch)
- 1637, L. Carduchi (Spanish)
- 1639, Pierre Hérigone (French)
- 1651, Heinrich Hoffmann (German)
- 1651, Thomas Rudd (English)
- 1660, Isaac Barrow (English)
- 1661, John Leeke and Geo. Serle (English)
- 1663, Domenico Magni (Italian from Latin)
- 1672, Claude François Milliet Dechales (French)
- 1680, Vitale Giordano (Italian)
- 1685, William Halifax (English)
- 1689, Jacob Knesa (Spanish)
- 1690, Vincenzo Viviani (Italian)
- 1694, Ant. Ernst Burkh v. Pirckenstein (German)
- 1695, C. J. Vooght (Dutch)
- 1697, Samuel Reyher (German)
- 1702, Hendrik Coets (Dutch)
- 1705, Edmund Scarburgh (English)
- 1708, John Keill (English)
- 1714, Chr. Schessler (German)
- 1714, W. Whiston (English)
- 1720s Jagannatha Samrat (Sanskrit, based on the Arabic translation of Nasir al-Din al-Tusi)
- 1731, Guido Grandi (abbreviation to Italian)
- 1738, Ivan Satarov (Russian from French)
- 1744, Mårten Strömer (Swedish)
- 1749, Dechales (Italian)
- 1745, Ernest Gottlieb Ziegenbalg (Danish)
- 1752, Leonardo Ximenes (Italian)
- 1756, Robert Simson (English)
- 1763, Pubo Steenstra (Dutch)
- 1768, Angelo Brunelli (Portuguese)
- 1773, 1781, J. F. Lorenz (German)
- 1780, Baruch Schick of Shklov (Hebrew)
- 1781, 1788 James Williamson (English)
- 1781, William Austin (English)
- 1789, Pr. Suvoroff and Yos. Nikitin (Russian from Greek)
- 1795, John Playfair (English)
- 1803, H.C. Linderup (Danish)
- 1804, F. Peyrard (French)
- 1807, Józef Czech (Polish based on Greek, Latin and English editions)
- 1807, J. K. F. Hauff (German)
- 1818, Vincenzo Flauti (Italian)
- 1820, Benjamin of Lesbos (Modern Greek)
- 1826, George Phillips (English)
- 1828, Joh. Josh and Ign. Hoffmann (German)
- 1828, Dionysius Lardner (English)
- 1833, E. S. Unger (German)
- 1833, Thomas Perronet Thompson (English)
- 1836, H. Falk (Swedish)
- 1844, 1845, 1859 P. R. Bråkenhjelm (Swedish)
- 1850, F. A. A. Lundgren (Swedish)
- 1850, H. A. Witt and M. E. Areskong (Swedish)
- 1862, Isaac Todhunter (English)
- 1865, Sámuel Brassai (Hungarian)
- 1873, Masakuni Yamada (Japanese)
- 1880, Vachtchenko-Zakhartchenko (Russian)
- 1901, Max Simon (German)
- 1908, Thomas Little Heath (English)
- 1939, R. Catesby Taliaferro (English)
Currently in print
- Euclid's Elements – All thirteen books in one volume, Based on Heath's translation, Green Lion Press ISBN 1-888009-18-7.
- The Elements: Books I-XIII-Complete and Unabridged, (2006) Translated by Sir Thomas Heath, Barnes & Noble ISBN 0-7607-6312-7.
- The Thirteen Books of Euclid's Elements, translation and commentaries by Heath, Thomas L. (1956) in three volumes. Dover Publications. ISBN 0-486-60088-2 (vol. 1), ISBN 0-486-60089-0 (vol. 2), ISBN 0-486-60090-4 (vol. 3)
- Heath (1956) (vol. 1), p. 372
- Heath (1956) (vol. 1), p. 409
- Boyer (1991). "Euclid of Alexandria". p. 101. "With the exception of the Sphere of Autolycus, surviving work by Euclid are the oldest Greek mathematical treatises extant; yet of what Euclid wrote more than half has been lost,"
- Heath (1956) (vol. 1), p. 114
- Encyclopedia of Ancient Greece (2006) by Nigel Guy Wilson, page 278. Published by Routledge Taylor and Francis Group. Quote: "Euclid's Elements subsequently became the basis of all mathematical education, not only in the Roman and Byzantine periods, but right down to the mid-20th century, and it could be argued that it is the most successful textbook ever written."
- Boyer (1991). "Euclid of Alexandria". p. 100. "As teachers at the school he called a band of leading scholars, among whom was the author of the most fabulously successful mathematics textbook ever written – the Elements (Stoichia) of Euclid."
- Boyer (1991). "Euclid of Alexandria". p. 119. "The Elements of Euclid not only was the earliest major Greek mathematical work to come down to us, but also the most influential textbook of all times. [...]The first printed versions of the Elements appeared at Venice in 1482, one of the very earliest of mathematical books to be set in type; it has been estimated that since then at least a thousand editions have been published. Perhaps no book other than the Bible can boast so many editions, and certainly no mathematical work has had an influence comparable with that of Euclid's Elements."
- The Historical Roots of Elementary Mathematics by Lucas Nicolaas Hendrik Bunt, Phillip S. Jones, Jack D. Bedient (1988), page 142. Dover publications. Quote:"the Elements became known to Western Europe via the Arabs and the Moors. There the Elements became the foundation of mathematical education. More than 1000 editions of the Elements are known. In all probability it is, next to the Bible, the most widely spread book in the civilization of the Western world."
- Russell, Bertrand. A History of Western Philosophy. p. 212.
- W.W. Rouse Ball, A Short Account of the History of Mathematics, 4th ed., 1908, p. 54
- Daniel Shanks (2002). Solved and Unsolved Problems in Number Theory. American Mathematical Society.
- Ball, p. 43
- Ball, p. 38
- The Earliest Surviving Manuscript Closest to Euclid's Original Text (Circa 850); an image of one page
- L.D. Reynolds and Nigel G. Wilson, Scribes and Scholars 2nd. ed. (Oxford, 1974) p. 57
- One older work claims Adelard disguised himself as a Muslim student in order to obtain a copy in Muslim Córdoba (Rouse Ball, p. 165). However, more recent biographical work has turned up no clear documentation that Adelard ever went to Muslim-ruled Spain, although he spent time in Norman-ruled Sicily and Crusader-ruled Antioch, both of which had Arabic-speaking populations. Charles Burnett, Adelard of Bath: Conversations with his Nephew (Cambridge, 1999); Charles Burnett, Adelard of Bath (University of London, 1987).
- Busard, H.L.L. (2005). "Introduction to the Text". Campanus of Novara and Euclid's Elements I. Stuttgart: Franz Steiner Verlag. ISBN 978-3-515-08645-5
- Henry Ketcham, The Life of Abraham Lincoln, at Project Gutenberg, http://www.gutenberg.org/ebooks/6811
- Dudley Herschbach, "Einstein as a Student," Department of Chemistry and Chemical Biology, Harvard University, Cambridge, MA, USA, page 3, web: HarvardChem-Einstein-PDF: about Max Talmud, who visited on Thursdays for six years.
- Ball, p. 55
- Ball, pp. 58, 127
- Heath (1963), p. 216
- Ball, p. 54
- Godfried Toussaint, "A new look at Euclid's second proposition," The Mathematical Intelligencer, Vol. 15, No. 3, 1993, pp. 12–23.
- Heath (1956) (vol. 1), p. 62
- Heath (1956) (vol. 1), p. 242
- Heath (1956) (vol. 1), p. 249
- Ball (1960) p. 55.
- Boyer (1991). "Euclid of Alexandria". pp. 118–119. "In ancient times it was not uncommon to attribute to a celebrated author works that were not by him; thus, some versions of Euclid's Elements include a fourteenth and even a fifteenth book, both shown by later scholars to be apocryphal. The so-called Book XIV continues Euclid's comparison of the regular solids inscribed in a sphere, the chief results being that the ratio of the surfaces of the dodecahedron and icosahedron inscribed in the same sphere is the same as the ratio of their volumes, the ratio being that of the edge of the cube to the edge of the icosahedron, that is, . It is thought that this book may have been composed by Hypsicles on the basis of a treatise (now lost) by Apollonius comparing the dodecahedron and icosahedron. [...] The spurious Book XV, which is inferior, is thought to have been (at least in part) the work of Isidore of Miletus (fl. ca. A.D. 532), architect of the cathedral of Holy Wisdom (Hagia Sophia) at Constantinople. This book also deals with the regular solids, counting the number of edges and solid angles in the solids, and finding the measures of the dihedral angles of faces meeting at an edge."
- Alexanderson & Greenwalt 2012, pg. 163
- K. V. Sarma (1997), in Helaine Selin, Encyclopaedia of the history of science, technology, and medicine in non-western cultures, Springer, pp. 460–461, ISBN 978-0-7923-4066-9
- JNUL Digitized Book Repository
- Alexanderson, Gerald L.; Greenwalt, William S. (2012), "About the cover: Billingsley's Euclid in English", Bulletin (New Series) of the American Mathematical Society 49 (1): 163–167
- Ball, W.W. Rouse (1960). A Short Account of the History of Mathematics (4th ed.) [Reprint. Original publication: London: Macmillan & Co., 1908]. New York: Dover Publications. pp. 50–62. ISBN 0-486-20630-0.
- Heath, Thomas L. (1956). The Thirteen Books of Euclid's Elements (3 vols.) (2nd ed. [Facsimile. Original publication: Cambridge University Press, 1925]). New York: Dover Publications. ISBN 0-486-60088-2 (vol. 1), ISBN 0-486-60089-0 (vol. 2), ISBN 0-486-60090-4 (vol. 3). Heath's authoritative translation plus extensive historical research and detailed commentary throughout the text.
- Heath, Thomas L. (1963). A Manual of Greek Mathematics. Dover Publications. ISBN 978-0-486-43231-1.
- Boyer, Carl B. (1991). A History of Mathematics (2nd ed.). John Wiley & Sons, Inc. ISBN 0-471-54397-7.
- Euclid (David E. Joyce, ed. 1997) [c. 300 BC]. Elements. Retrieved 2006-08-30. In HTML with Java-based interactive figures.
- Euclid's Elements in English and Greek (PDF), utexas.edu
- Richard Fitzpatrick, a bilingual edition (typeset in PDF format, with the original Greek and an English translation on facing pages; free in PDF form, available in print) ISBN 978-0-615-17984-1
- Heath's English translation (HTML, without the figures, public domain) (accessed February 4, 2010)
- Euclid's Elements in ancient Greek (typeset in PDF format, public domain; available in print—free download)
- Oliver Byrne's 1847 edition (also hosted at archive.org) – an unusual version by Oliver Byrne (mathematician) who used color rather than labels such as ABC (scanned page images, public domain)
- The First Six Books of the Elements by John Casey and Euclid scanned by Project Gutenberg.
- Reading Euclid – a course in how to read Euclid in the original Greek, with English translations and commentaries (HTML with figures)
- Sir Thomas More's manuscript
- Latin translation by Aethelhard of Bath
- Euclid Elements – The original Greek text Greek HTML
- Clay Mathematics Institute Historical Archive – The thirteen books of Euclid's Elements copied by Stephen the Clerk for Arethas of Patras, in Constantinople in 888 AD
- Kitāb Taḥrīr uṣūl li-Ūqlīdis Arabic translation of the thirteen books of Euclid's Elements by Nasīr al-Dīn al-Ṭūsī. Published by Medici Oriental Press (also, Typographia Medicea). Facsimile hosted by Islamic Heritage Project.
- Euclid's "Elements" Redux, an open textbook based on the "Elements" | http://en.wikipedia.org/wiki/Euclid's_Elements | 13 |
How to Think Like a Computer Scientist: Learning with Python 2nd Edition/Iteration
Multiple assignment
As you may have discovered, it is legal to make more than one assignment to the same variable. A new assignment makes an existing variable refer to a new value (and stop referring to the old value).
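For example, a minimal reconstruction of the kind of program the next paragraph describes (the variable name bruce and the values 5 and 7 come from the text below; Python 2 syntax, as used throughout this book):

bruce = 5
print bruce,     # the trailing comma keeps the next value on the same line
bruce = 7
print bruce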
The output of this program is 5 7, because the first time bruce is printed, his value is 5, and the second time, his value is 7. The comma at the end of the first print statement suppresses the newline after the output, which is why both outputs appear on the same line.
Here is what multiple assignment looks like in a state diagram:
[State diagram: multiple assignment]

With multiple assignment it is especially important to distinguish between an assignment operation and a statement of equality. Because Python uses the equal sign (=) for assignment, it is tempting to interpret a statement like a = b as a statement of equality. It is not!
First, equality is symmetric and assignment is not. For example, in mathematics, if a = 7 then 7 = a. But in Python, the statement a = 7 is legal and 7 = a is not.
Furthermore, in mathematics, a statement of equality is always true. If a = b now, then a will always equal b. In Python, an assignment statement can make two variables equal, but they don't have to stay that way:
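For example (a sketch consistent with the description in the next sentence; the particular values 5 and 3 are only illustrative):

a = 5
b = a    # a and b are now equal
a = 3    # a and b are no longer equal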
The third line changes the value of a but does not change the value of b, so they are no longer equal. (In some programming languages, a different symbol is used for assignment, such as <- or :=, to avoid confusion.)
Updating variables
One of the most common forms of multiple assignment is an update, where the new value of the variable depends on the old.
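For example (the variable name x matches the discussion that follows):

x = x + 1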
This means get the current value of x, add one, and then update x with the new value.
If you try to update a variable that doesn't exist, you get an error, because Python evaluates the expression on the right side of the assignment operator before it assigns the resulting value to the name on the left:
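For example, in a Python 2 interactive session (a sketch; the exact traceback text can vary between versions):

>>> x = x + 1
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
NameError: name 'x' is not defined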
Before you can update a variable, you have to initialize it, usually with a simple assignment:
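For example (a sketch):

>>> x = 0        # initialize x
>>> x = x + 1    # now the update works
>>> x
1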
Updating a variable by adding 1 is called an increment; subtracting 1 is called a decrement.
The while statement
Computers are often used to automate repetitive tasks. Repeating identical or similar tasks without making errors is something that computers do well and people do poorly.
Repeated execution of a set of statements is called iteration. Because iteration is so common, Python provides several language features to make it easier. The first feature we are going to look at is the while statement.
Here is a function called countdown that demonstrates the use of the while statement:
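The listing itself is not reproduced in this copy; a minimal reconstruction from the description in the next paragraph (Python 2 syntax) is:

def countdown(n):
    while n > 0:
        print n
        n = n - 1
    print "Blastoff!"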
You can almost read the while statement as if it were English. It means, While n is greater than 0, continue displaying the value of n and then reducing the value of n by 1. When you get to 0, display the word Blastoff!
More formally, here is the flow of execution for a while statement:
- Evaluate the condition, yielding False or True.
- If the condition is false, exit the while statement and continue execution at the next statement.
- If the condition is true, execute each of the statements in the body and then go back to step 1.
The body consists of all of the statements below the header with the same indentation.
This type of flow is called a loop because the third step loops back around to the top. Notice that if the condition is false the first time through the loop, the statements inside the loop are never executed.
The body of the loop should change the value of one or more variables so that eventually the condition becomes false and the loop terminates. Otherwise the loop will repeat forever, which is called an infinite loop. An endless source of amusement for computer scientists is the observation that the directions on shampoo, Lather, rinse, repeat, are an infinite loop.
In the case of countdown, we can prove that the loop terminates because we know that the value of n is finite, and we can see that the value of n gets smaller each time through the loop, so eventually we have to get to 0. In other cases, it is not so easy to tell. Look at the following function, defined for all positive integers n:
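The listing is missing here as well; a reconstruction consistent with the description below (the name sequence and the condition n != 1 come from the text) is:

def sequence(n):
    while n != 1:
        print n,              # print the terms on one line
        if n % 2 == 0:        # n is even
            n = n / 2
        else:                 # n is odd
            n = n * 3 + 1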
The condition for this loop is n != 1, so the loop will continue until n is 1, which will make the condition false.
Each time through the loop, the program outputs the value of n and then checks whether it is even or odd. If it is even, the value of n is divided by 2. If it is odd, the value is replaced by n * 3 + 1. For example, if the starting value (the argument passed to sequence) is 3, the resulting sequence is 3, 10, 5, 16, 8, 4, 2, 1.
Since n sometimes increases and sometimes decreases, there is no obvious proof that n will ever reach 1, or that the program terminates. For some particular values of n, we can prove termination. For example, if the starting value is a power of two, then the value of n will be even each time through the loop until it reaches 1. The previous example ends with such a sequence, starting with 16.
Particular values aside, the interesting question is whether we can prove that this program terminates for all values of n. So far, no one has been able to prove it or disprove it!
Tracing a program
To write effective computer programs a programmer needs to develop the ability to trace the execution of a computer program. Tracing involves becoming the computer and following the flow of execution through a sample program run, recording the state of all variables and any output the program generates after each instruction is executed.
To understand this process, let's trace the call to sequence(3) from the previous section. At the start of the trace, we have a local variable, n (the parameter), with an initial value of 3. Since 3 is not equal to 1, the while loop body is executed. 3 is printed and 3 % 2 == 0 is evaluated. Since it evaluates to False, the else branch is executed and 3 * 3 + 1 is evaluated and assigned to n.
To keep track of all this as you hand trace a program, make a column heading on a piece of paper for each variable created as the program runs and another one for output. Our trace so far would look something like this:
n     output
--    ------
3     3
10
Since 10 != 1 evaluates to True, the loop body is again executed, and 10 is printed. 10 % 2 == 0 is true, so the if branch is executed and n becomes 5. By the end of the trace we have:
n     output
--    ------
3     3
10    10
5     5
16    16
8     8
4     4
2     2
1
Tracing can be a bit tedious and error prone (that's why we get computers to do this stuff in the first place!), but it is an essential skill for a programmer to have. From this trace we can learn a lot about the way our code works. We can observe that as soon as n becomes a power of 2, for example, the program will require log2(n) executions of the loop body to complete. We can also see that the final 1 will not be printed as output.
6.5 Counting digits
The following function counts the number of decimal digits in a positive integer expressed in decimal format:
def num_digits(n):
    count = 0
    while n != 0:
        count = count + 1
        n = n / 10
    return count
A call to num_digits(710) will return 3. Trace the execution of this function call to convince yourself that it works.
This function demonstrates another pattern of computation called a counter. The variable count is initialized to 0 and then incremented each time the loop body is executed. When the loop exits, count contains the result --the total number of times the loop body was executed, which is the same as the number of digits.
If we wanted to only count digits that are either 0 or 5, adding a conditional before incrementing the counter will do the trick:
def num_zero_and_five_digits(n):
    count = 0
    while n != 0:
        digit = n % 10
        if digit == 0 or digit == 5:
            count = count + 1
        n = n / 10
    return count
Confirm that num_zero_and_five_digits(1055030250) returns 7.
>>> 1055030250 % 10
0            #count = 1
>>> 1055030250/10
105503025
>>> 105503025 % 10
5            #count = 2
>>> 105503025/10
10550302
>>> 10550302 % 10
2            #count = 2 (no change)
>>> 10550302/10
1055030
>>> 1055030 % 10
0            #count = 3
>>> 1055030/10
105503
>>> 105503 % 10
3            #count = 3 (no change)
>>> 105503/10
10550
>>> 10550 % 10
0            #count = 4
>>> 10550/10
1055
>>> 1055 % 10
5            #count = 5
>>> 1055/10
105
>>> 105 % 10
5            #count = 6
>>> 105/10
10
>>> 10 % 10
0            #count = 7
>>> 10/10
1
>>> 1 % 10
1            #count = 7 (no change)
>>> 1/10
0            #n = 0 so loop exits
Abbreviated assignment
Incrementing a variable is so common that Python provides an abbreviated syntax for it:
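For example (a sketch; the variable name count matches the next sentence):

>>> count = 0
>>> count += 1
>>> count
1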
count += 1 is an abbreviation for count = count + 1. The increment value does not have to be 1:
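For example (illustrative values only):

>>> n = 2
>>> n += 5
>>> n
7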
There are also abbreviations for -=, *=, /=, and %=:
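For example (a sketch with assumed values; note that / performs integer division on ints in Python 2):

>>> n = 10
>>> n -= 3     # n is now 7
>>> n *= 2     # n is now 14
>>> n /= 4     # n is now 3 (integer division)
>>> n %= 2     # n is now 1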
6.7. Tables
One of the things loops are good for is generating tabular data. Before computers were readily available, people had to calculate logarithms, sines and cosines, and other mathematical functions by hand. To make that easier, mathematics books contained long tables listing the values of these functions. Creating the tables was slow and boring, and they tended to be full of errors.
When computers appeared on the scene, one of the initial reactions was, This is great! We can use the computers to generate the tables, so there will be no errors. That turned out to be true (mostly) but shortsighted. Soon thereafter, computers and calculators were so pervasive that the tables became obsolete.
Well, almost. For some operations, computers use tables of values to get an approximate answer and then perform computations to improve the approximation. In some cases, there have been errors in the underlying tables, most famously in the table the Intel Pentium used to perform floating-point division.
Although a log table is not as useful as it once was, it still makes a good example of iteration. The following program outputs a sequence of values in the left column and 2 raised to the power of that value in the right column:
# program to make tabular data
# x = 1
# prints x (which is 1), then tab, then 2^x (which is 2)
# x += 1 adds 1 to the value of x
# the loop runs while the value of x is less than 13, so the last value printed is x = 12
x = 1
while x < 13:
    print x, '\t', 2**x
    x += 1
The string '\t' represents a tab character. The backslash character in '\t' indicates the beginning of an escape sequence. Escape sequences are used to represent invisible characters like tabs and newlines. The sequence \n represents a newline.
An escape sequence can appear anywhere in a string; in this example, the tab escape sequence is the only thing in the string. How do you think you represent a backslash in a string?
As characters and strings are displayed on the screen, an invisible marker called the cursor keeps track of where the next character will go. After a print statement, the cursor normally goes to the beginning of the next line.
The tab character shifts the cursor to the right until it reaches one of the tab stops. Tabs are useful for making columns of text line up, as in the output of the previous program:
1       2
2       4
3       8
4       16
5       32
6       64
7       128
8       256
9       512
10      1024
11      2048
12      4096
Because of the tab characters between the columns, the position of the second column does not depend on the number of digits in the first column.
6.8. Two-dimensional tables
A two-dimensional table is a table where you read the value at the intersection of a row and a column. A multiplication table is a good example. Let's say you want to print a multiplication table for the values from 1 to 6.
A good way to start is to write a loop that prints the multiples of 2, all on one line:
# i = 1
# prints 2 * i (which is 2), then three spaces
# i += 1 adds 1 to the value of i, making i = 2
# the loop runs while the value of i is less than or equal to 6
# then prints 2 * i again (which is 4), and so on
# the last comma on the print line suppresses the newline
# after the loop is complete, the second print statement starts a new line
i = 1
while i <= 6:
    print 2 * i, '   ',
    i += 1
print
The first line initializes a variable named i, which acts as a counter or loop variable. As the loop executes, the value of i increases from 1 to 6. When i is 7, the loop terminates. Each time through the loop, it displays the value of 2 * i, followed by three spaces.
Again, the comma in the print statement suppresses the newline. After the loop completes, the second print statement starts a new line.
The output of the program is:
2 4 6 8 10 12
So far, so good. The next step is to encapsulate and generalize.
6.9 Encapsulation and generalization
Encapsulation is the process of wrapping a piece of code in a function, allowing you to take advantage of all the things functions are good for. You have already seen two examples of encapsulation: print_parity in chapter 4; and is_divisible in chapter 5.
Generalization means taking something specific, such as printing the multiples of 2, and making it more general, such as printing the multiples of any integer.
This function encapsulates the previous loop and generalizes it to print multiples of n:
# i = 1
# say n = 2
# prints n * i (which is 2), then a tab
# i += 1 adds 1 to the value of i, making i = 2
# the loop runs while the value of i is less than or equal to 6
# then prints n * i again (which is 4), and so on
def print_multiples(n):
    i = 1
    while i <= 6:
        print n * i, '\t',
        i += 1
    print
To encapsulate, all we had to do was add the first line, which declares the name of the function and the parameter list. To generalize, all we had to do was replace the value 2 with the parameter n.
If we call this function with the argument 2, we get the same output as before. With the argument 3, the output is:
3 6 9 12 15 18
With the argument 4, the output is:
4 8 12 16 20 24
By now you can probably guess how to print a multiplication table --- by calling print_multiples repeatedly with different arguments. In fact, we can use another loop:
# see above code for description - code included in new program to make it run
def print_multiples(n):
    i = 1
    while i <= 6:
        print n * i, '\t',
        i += 1
    print

# here the value of i goes into the above function as n,
# so on the first pass n = 1 and it prints n * i (1 * 1), '\t', and so on
i = 1
while i <= 6:
    print_multiples(i)
    i += 1
Notice how similar this loop is to the one inside print_multiples. All we did was replace the print statement with a function call.
The output of this program is a multiplication table:
1   2   3   4   5   6
2   4   6   8   10  12
3   6   9   12  15  18
4   8   12  16  20  24
5   10  15  20  25  30
6   12  18  24  30  36
More encapsulation
To demonstrate encapsulation again, let's take the code from the last section and wrap it up in a function:
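The wrapped-up function is not shown in this copy; a reconstruction from the surrounding text (the name print_mult_table is used throughout the rest of the chapter) is:

def print_mult_table():
    i = 1
    while i <= 6:
        print_multiples(i)
        i += 1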
This process is a common development plan. We develop code by writing lines of code outside any function, or typing them in to the interpreter. When we get the code working, we extract it and wrap it up in a function.
This development plan is particularly useful if you don't know how to divide the program into functions when you start writing. This approach lets you design as you go along.
Local variables
You might be wondering how we can use the same variable, i, in both print_multiples and print_mult_table. Doesn't it cause problems when one of the functions changes the value of the variable?
The answer is no, because the i in print_multiples and the i in print_mult_table are not the same variable.
Variables created inside a function definition are local; you can't access a local variable from outside its home function. That means you are free to have multiple variables with the same name as long as they are not in the same function.
The stack diagram for this program shows that the two variables named i are not the same variable. They can refer to different values, and changing one does not affect the other.
[Stack diagram for print_mult_table and print_multiples]

The value of i in print_mult_table goes from 1 to 6. In the diagram it happens to be 3. The next time through the loop it will be 4. Each time through the loop, print_mult_table calls print_multiples with the current value of i as an argument. That value gets assigned to the parameter n.
Inside print_multiples, the value of i goes from 1 to 6. In the diagram, it happens to be 2. Changing this variable has no effect on the value of i in print_mult_table.
It is common and perfectly legal to have different local variables with the same name. In particular, names like i and j are used frequently as loop variables. If you avoid using them in one function just because you used them somewhere else, you will probably make the program harder to read.
More generalization
As another example of generalization, imagine you wanted a program that would print a multiplication table of any size, not just the six-by-six table. You could add a parameter to print_mult_table:
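A sketch of the generalized function described in the next sentence (the parameter name high comes from the text):

def print_mult_table(high):
    i = 1
    while i <= high:
        print_multiples(i)
        i += 1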
We replaced the value 6 with the parameter high. If we call print_mult_table with the argument 7, it displays:
1   2   3   4   5   6
2   4   6   8   10  12
3   6   9   12  15  18
4   8   12  16  20  24
5   10  15  20  25  30
6   12  18  24  30  36
7   14  21  28  35  42
This is fine, except that we probably want the table to be square --- with the same number of rows and columns. To do that, we add another parameter to print_multiples to specify how many columns the table should have.
Just to be annoying, we call this parameter high, demonstrating that different functions can have parameters with the same name (just like local variables). Here's the whole program:
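The whole program is not reproduced in this copy; a reconstruction consistent with the description (both functions now take a parameter named high, and print_mult_table passes it along when it calls print_multiples) is:

def print_multiples(n, high):
    i = 1
    while i <= high:
        print n * i, '\t',
        i += 1
    print

def print_mult_table(high):
    i = 1
    while i <= high:
        print_multiples(i, high)
        i += 1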
Notice that when we added a new parameter, we had to change the first line of the function (the function heading), and we also had to change the place where the function is called in print_mult_table.
As expected, this program generates a square seven-by-seven table:
1   2   3   4   5   6   7
2   4   6   8   10  12  14
3   6   9   12  15  18  21
4   8   12  16  20  24  28
5   10  15  20  25  30  35
6   12  18  24  30  36  42
7   14  21  28  35  42  49
When you generalize a function appropriately, you often get a program with capabilities you didn't plan. For example, you might notice that, because ab = ba, all the entries in the table appear twice. You could save ink by printing only half the table. To do that, you only have to change one line of print_mult_table. Change
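(the exact listing is not reproduced here; assuming the version of print_mult_table sketched above, the call in question is)

print_multiples(i, high)

to

print_multiples(i, i)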
and you get:
1
2   4
3   6   9
4   8   12  16
5   10  15  20  25
6   12  18  24  30  36
7   14  21  28  35  42  49
A few times now, we have mentioned all the things functions are good for. By now, you might be wondering what exactly those things are. Here are some of them:
- Giving a name to a sequence of statements makes your program easier to read and debug.
- Dividing a long program into functions allows you to separate parts of the program, debug them in isolation, and then compose them into a whole.
- Functions facilitate the use of iteration.
- Well-designed functions are often useful for many programs. Once you write and debug one, you can reuse it.
Newton's Method
Loops are often used in programs that compute numerical results by starting with an approximate answer and iteratively improving it.
For example, one way of computing square roots is Newton's method. Suppose that you want to know the square root of n. If you start with almost any approximation, you can compute a better approximation with the following formula:
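The formula itself is not reproduced in this copy; Newton's update rule (sometimes called the Babylonian method), writing approx for the current approximation (the variable names are only illustrative), is:

better = (approx + n/approx) / 2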
By repeatedly applying this formula until the better approximation is equal to the previous one, we can write a function for computing the square root:
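A sketch of such a function, consistent with the description (the function name sqrt and the variable better are referred to later in the chapter; the starting guess of n / 2.0 is an assumption):

def sqrt(n):
    approx = n / 2.0                       # initial guess (assumed)
    better = (approx + n / approx) / 2
    while better != approx:
        approx = better
        better = (approx + n / approx) / 2
    return better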
Try calling this function with 25 as an argument to confirm that it returns 5.0.
Newton's method is an example of an algorithm: it is a mechanical process for solving a category of problems (in this case, computing square roots).
It is not easy to define an algorithm. It might help to start with something that is not an algorithm. When you learned to multiply single-digit numbers, you probably memorized the multiplication table. In effect, you memorized 100 specific solutions. That kind of knowledge is not algorithmic.
But if you were lazy, you probably cheated by learning a few tricks. For example, to find the product of n and 9, you can write n - 1 as the first digit and 10 - n as the second digit. This trick is a general solution for multiplying any single-digit number by 9. That's an algorithm!
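For instance, with n = 7 the trick gives 7 - 1 = 6 as the first digit and 10 - 7 = 3 as the second, that is 63, which is indeed 9 × 7.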
Similarly, the techniques you learned for addition with carrying, subtraction with borrowing, and long division are all algorithms. One of the characteristics of algorithms is that they do not require any intelligence to carry out. They are mechanical processes in which each step follows from the last according to a simple set of rules.
In our opinion, it is embarrassing that humans spend so much time in school learning to execute algorithms that, quite literally, require no intelligence.
On the other hand, the process of designing algorithms is interesting, intellectually challenging, and a central part of what we call programming.
Some of the things that people do naturally, without difficulty or conscious thought, are the hardest to express algorithmically. Understanding natural language is a good example. We all do it, but so far no one has been able to explain how we do it, at least not in the form of an algorithm.
Write a single string that:
produces this output.
- Add a print statement to the sqrt function defined in section 6.14 that prints out better each time it is calculated. Call your modified function with 25 as an argument and record the results.
- Trace the execution of the last version of print_mult_table and figure out how it works.
Write a function print_triangular_numbers(n) that prints out the first n triangular numbers. A call to print_triangular_numbers(5) would produce the following output:
1       1
2       3
3       6
4       10
5       15

(hint: use a web search to find out what a triangular number is.)
Open a file named ch06.py and add the following: Write a function, is_prime, which takes a single integral argument and returns True when the argument is a prime number and False otherwise. Add doctests to your function as you develop it.
What will num_digits(0) return? Modify it to return 1 for this case. Why does a call to num_digits(-24) result in an infinite loop (hint: -1/10 evaluates to -1)? Modify num_digits so that it works correctly with any integer value. Add the following to the ch06.py file you created in the previous exercise: Add your function body to num_digits and confirm that it passes the doctests.
Add the following to the ch06.py: Write a body for num_even_digits so that it works as expected.
Add the following to ch06.py: Write a body for print_digits so that it passes the given doctests.
Write a function sum_of_squares_of_digits that computes the sum of the squares of the digits of an integer passed to it. For example, sum_of_squares_of_digits(987) should return 194, since 9**2 + 8**2 + 7**2 == 81 + 64 + 49 == 194.
Check your solution against the doctests above.
An Outline Sketch of the Origin and History of Constellations and Star-Names by Gary D. Thompson
Copyright © 2007-2013 by Gary D. Thompson
An Outline Sketch of the Origin and History of Constellations and Star-Names
(1) The Nature of Constellations
The origin of constellations (star groupings) is one of the most discussed themes in the history of Western astronomy. Many questions concerning the origin of the constellations are likely to remain unanswered. It is possible that astronomy originated when early cultures began not only time-keeping practices but also began the practice of grouping individual stars into constellations. The establishment of constellations/asterisms was perhaps the earliest prelude to the origin of quantitative astronomy.
Constellations are named patterns of stars derived from the random placement of stars visible in the night sky. The word constellation means a "set of stars." (An asterism is any grouping of stars, whether a constellation or not. The well-known "Big Dipper" is an asterism, not a constellation.) Constellations are arbitrary subjective/imagined flat groupings of stars (perceived as figures or patterns) among the stars visible in the sky. However, they are not always groupings of essentially random dots. Some groupings/patterns are 'objectively' suggested by the apparent placement of brighter stars in the sky. (Mostly, the 48 ancient Greek constellations single out only the bright patterns.) The three most obvious groupings of stars in the northern sky are (1) the Dipper, (2) Orion, and (3) the Pleiades. Outside of these stars constellations (star groupings)/constellation figures are not obvious. Gravitationally, the stars comprising constellation figures have nothing to do with each other. They are not groups of stars actually clustered together. They appear contiguous only because we view them in two dimensions. (Constellations, in reality, are 3 dimensional. The stars forming them are not at the same distance from the earth. The stars grouped in a constellation lie roughly in the same direction in space but are at greatly different distances from the sun.) The relative positions of the stars appear to remain fixed over time. The apparent patterns formed by the stars appear to be fixed. Constellations are a natural means of dividing the sky into (arbitrary) areas. Within the same latitude different peoples see the same stars in the sky but discern different star patterns. Their interpretation of the sky also differs. The ancient Egyptians believed the sky was populated by gods/goddesses in the form of (large) constellations. The ancient Greeks, however, did not share this belief. Peoples in equatorial latitudes (Inca, Peru) and southern latitudes (Australian aborigines) discerned constellations not only in groups of stars but also in the prominent light and dark areas of the Milky Way. (In northern latitudes several "dark constellations" were also identified" "The Great rift" of the summer Milky Way, and "The Coalsack" in the Crux.) Exactly where and when the practice of creating constellations first occurred is not known but may have been as early as the Neolithic Period (or earlier).
Approximately 3000 stars are visible to a night sky observer. Readily apparent star patterns liable to grouping as constellations by any culture include (1) the Pleiades, (2) the Hyades, (3) the Big Dipper (in Ursa Major), (4) Orion, (5) Orion's Belt, (6) the Northern Cross (in Cygnus), (7) Cassiopeia, (8) Castor and Pollux, and (9) the Southern Cross. The 3 most obvious groupings of stars in the northern sky are (1) the Pleiades, (2) the Big Dipper, and (3) Orion. Another distinctive/obvious grouping of stars is the W shape of stars forming the circumpolar constellation of Cassiopeia. Constellations are recognised by their pattern and also by individual (bright) stars comprising the pattern. Different cultures used the same stars to create different constellations. In the Aratean scheme of constellations (ancient Greece) the 7 stars of the Big Dipper formed the core of the Great Bear constellation. In ancient China the Big Dipper was known as the "Bureaucrat's Cart."
Regarding the zodiac; a common misconception is to term the signs as "constellations." The 12 signs of the zodiac are not the same as the 12 constellations comprising the zodiac, or any of 88 constellations used in observational astronomy. The constellations are by definition a pattern of stars, and their sizes differ greatly. The 12 signs of the zodiac, on the other hand, are purely geometrical constructs.
(2) Methods for Investigating Constellation Origins
Theories concerning the origin of the constellations remain largely speculative. Analytical tools and methods able to be applied to the problem of the origin of the constellations ranked in order of approximate reliability and importance are: (1) Historical (extant astronomical texts), (2) Philological (analysis of constellation names), (3) Anthropological (anthropological analogy regarding the purpose of constellations), (4) Archaeological (iconography), (5) Statistical (statistical analysis of information and items), (6) Mythological (constellation myths), and (7) Precessional (past constellation positions).
(3) Early (Earliest) Constellations
The grouping of stars into asterisms or constellations is likely to be of great antiquity. The ethnographic evidence indicates that the big dipper asterism that presently forms part of our modern Ursa Major constellation was anciently identified as a bear constellation throughout many parts of the world. It is commonly held that the existence of certain parallels between Siberian/Asian star lore and North America star lore relating to the big dipper asterism establishes a pre-Columbian origin for the latter and also an Ice-Age antiquity for such. Proponents maintain that the big dipper bear constellation entered the American continent with a wave of immigrants circa 14,000 years ago. However, the part-time ethnologist Stansbury Hagar remarked in his 1900 article ("The Celestial Bear." (Journal of American Folk-Lore, Volume 13, Number 49, Apr.-Jun., Pages 92-103)) on the Native American bear constellation: "When we seek legends connected with the Bear, we find that in spite of the widespread knowledge of the name there is by no means a wealth of material."
Cultures world-wide appropriated 'standard' elements of the night sky. The three most obvious groupings of stars in the northern sky are (1) the Dipper, (2) Orion, and (3) the Pleiades. (Readily apparent star patterns liable to grouping as constellations by any culture include (1) the Pleiades, (2) the Hyades, (3) the Big Dipper (in Ursa Major), (4) Orion, (5) Orion's Belt, (6) the Northern Cross (in Cygnus), (7) Cassiopeia, (8) Castor and Pollux, and (9) the Southern Cross (southern hemisphere). Also, the Milky Way. Additional the Pole (or northern polar region of the celestial equator).)
A simple model of constellation development is not indicated as satisfactory. Attempting to decide in terms of universal diffusion (monogenesis) or independent invention (polygenesis) overlooks the evidence for both being at work to varying degrees and circumstances. There is also evidence of constellations and constellation sets being developed in considerable isolation. Four major constellation sets in the ancient world (and the classical period) involved Mesopotamia (Near East), Egypt (Mediterranean), India (Near East), China (Orient), and Greece (Mediterranean). The constellation set of Northern Europe was established late. (The geographic/cultural blocks, Orient, Near East, Mediterranean, and Northern Europe can be useful when considering the transmission of astronomical knowledge in the ancient world.) The Mesopotamian constellation patterns/set was - to a degree - influential on Greece and also India. However, it was not really influential in Egypt and China. Diffusion and independent invention were both at work with constellation usage and constellation myths (specific conceptualisations). Stars and constellations can be used in 1 or more ways, including time keeping, seasonal indicators, direction finding, and social metaphors/constellation lore. Because these functions are distinctive, conspicuous, and useful, they have an independent cultural value.
It is very probable that the Avestan Titar (Titrya) (Sirius) corresponds to the Vedic Tisya (Tishya). The Vedic Tisya appears as a vaguely astralised archer. In the Rig Veda the god Tisya is the celestial archer. Bernhard Forssman has proposed an etymological explanation showing it is most likely that the Vedic Tisya corresponds to the Avestan Titrya, and that Sirius has a direct and clear relationship with the three stars of Orion's belt. In several mythological passages in Vedic literature the three stars comprising the asterism of Orion's belt were represented as an arrow shot by Tisya. In the Avestan Yast 8.6-7 and 37-38 Titrya flies in the sky as the arrow shot by their Aryan hero archer. (The connection lies not in the Rig Veda but in later Indian literature.) Also, a Babylonian text dealing with the new year rituals states that the star KAK-SI-SÁ [KAK.SI.DI] (Sirius) measures the waters. This compares with the Iranian Titrya raising the waters of Vourukaša. (The month of Sirius (Tīrī) was associated with the rainy season.)
The similarity occurring in in Babylonia, China and Egypt with the ancient constellation figures to the southeast of Sirius is quite remarkable.
The Chinese have a Bow and Arrow asterism Hou-Chi (reputedly dating to at least the 4th-century BCE) formed by the same stars (in Canis Major) as the Mespotamian Bow and Arrow constellations. The Bow and Arrow is aimed at the Jackal (T'ien-Lang) which is the star Sirius. The celestial Emperor (i.e., mythical ancient Emperors) shot an arrow at the sky jackal (Sirius). In later Egypt, on the round zodiac of Denderah the Egyptian divine archeress, Satit (Satet) (one of two wives of Khnumu), situated just to the east of the Cow in the Barque (Sirius), shoots her arrow at Sirius. Mesopotamian uranography (late period) had constellations comprising of Bow and Arrow (mul BAN and mul KAK.SI.DI). (Also written as mul KAK.SI.DÁ.) Sirius is KAK.SI.DI the Arrow Star (specifically the (shining) tip of the arrow). The Bow is formed from the stars of Argo and Canis Major. Presumably one or more stars between Sirius and mul BAN marked the shaft of the arrow. The MUL.APIN text states "the Bow Star is the Ishtar of Elam, daughter of Enlil." (The Mesopotamian mul KAK-SI-DI (Sirius) is always identified as an arrow.) The planet Mercury was also called "Arrow." This is likely because both Sirius and Mercury move across the sky at a rapid speed. Sirius was associated with Ninurta (god of the thunderstorms, the plough, and the south wind, and god of the city of Nippur), and Mercury with Nabû (god of wisdom and writing). The Mesopotamian Bow and Arrow constellations are identifiable as the original source for the Iranian, Indian, Chinese, and Egyptian Bow and Arrow schemes. In cuneiform texts Venus as morning star is sometimes called the Bow Star (due to Ishtar of Agade being a war goddess). The issue was first discussed by Franz Kugler in his SSB II. According to Antonio Panaino Mesopotamian astral beliefs regarding mul KAK.SI.DI began influencing Iranian beliefs about Titrya in the Achaemenid Period. Iranian beliefs about Titrya became syncretistic and Titrya also became a god related to (1) the calendar, (2) the astral interpretation of the feasts of the Adonis-Tammuz fertility cycle, and (3) astrological speculation.
(4) Early Western Constellation Sets
The splendour of the starry night sky must have been a constant source of fascination since the dawn of human history. However, this does not mean there were early attempts to constellate the night sky and give names to prominent stars. Ancient constellation 'systems' did not fill the entire visible sky. Ed Krupp has pointed out that constellation systems are functions of social complexity. Nomadic hunters and herders don't actually develop full constellation systems but select key elements of the sky that are useful. The appearance of elaborate constellation sets as reference systems covering most of the visible sky only originated with the development of complex societies. Constellations are a means of organising the sky by dividing it into smaller segments. This function of constellations was certainly required by the Mesopotamians as their astral sciences developed.
Knowledge of early uranography exists as (1) a qualitative descriptive tradition (and the constellations are associated with myths), and later (2) a quantitative mathematical tradition (i.e., described stars are located in a co-ordinate system). Within the early qualitative descriptive tradition the locations of stars were described in terms of their relative positions within a constellation figure.
There is some evidence for the existence of constellations in the late 3rd millennium BCE in Sumeria (Ur III Period) and also in the Middle East in the city-states of Ebla and Mari. In his article "Further Notes on Birmingham Cuneiform Tablets volume I." (Acta Sumerologica, Volume 13, 1991, Pages 406-417) the Assyriologist Wayne Horowitz includes a brief discussion of possible evidence pointing to an Ur III origin of at least some constellation and star names.
Complex constellation systems make their earliest appearances in the 2nd millennium BCE in the stable kingships of Mesopotamia, Egypt, and China. In these empires astronomy had become a state supported and state directed enterprise. The extant evidence clearly indicates the significant role of Mesopotamian civilization in the origin of the constellations. (Likely dating to the neo-Sumerian period in the late 3rd millennium BCE.) The Babylonian constellation set of the 1st-millennium BCE influenced the constellation set consolidated by the Greeks. (The consolidation of the major astral omen series Enuma Anu Enlil between circa 1300 BCE and 1000 BCE was likely an influence for the constellating of the entire Mesopotamian sky.) The Greek constellation set forms the core of the (European) constellation set we use today. The complete constellating of the Greek sky was also done rather rapidly; likely between the 5th and the 3rd centuries BCE.
Since the work of the Belgian astronomer Eugène Delporte on constellation boundaries (published 1930) the constellations are, in modern astronomy, no longer regarded as star patterns but rather as precisely defined areas of the sky instead.
(5) Early Star Catalogues
Modern star catalogues give the position of the stars in a mathematical system of coordinates. Ancient 'star catalogs,' such as Ptolemy's 'star catalogue' were quite different and were not well-suited for astronomical observation. Their intention was more towards providing basic information on the stars comprising the constellation figures and the relative positions of the constellations, as well as explaining legendary aspects of the constellation figures.
The term "early star catalogues" is also commonly applied to descriptions of Greek (and Babylonian) uranography prior to Ptolemy. With few exceptions these "early star catalogues", however, are distinctly different from what modern astronomers, from Ptolemy onwards, have meant by the term. With few exceptions, prior to Ptolemy star catalogues did not give the position of stars by any system of mathematical coordinates. They are instead qualitative descriptions of the constellations. They simply note the number of stars in each part of a constellation and the general location of the brighter stars within a constellation. (The type of description usually used is "near X is Y".) This cumbersome method of describing the location of stars in terms of their relative positions in a constellation was used by both the Babylonians and the Greeks. The pictorial arrangement of stars is not a star catalogue. A star catalogue proper gives accurate positions for each individual star regardless of the constellation it is grouped into. Also, the boundaries of the Greek constellations were subject to change up to the time of Ptolemy.
In Greek astronomy the stars within the constellation figures were usually not given individual names. (There are only a few individual star names from Greece. The most prominent stars in the sky were usually nameless in Greek civilization. If there was a system of Greek star names then it has not come down to us and also would appear unknown to Ptolemy.) Greek constellations ("star catalogues") up to the time of Ptolemy are descriptive. The Western tradition of describing the constellations by means of describing the relative positions of the stars within the constellation figures was firmly established by Eratosthenes and Hipparchus. In their descriptions to the time of Ptolemy the constellations were defined by the Greeks by their juxtaposition (i.e., descriptive comparison of positional relationship to each other). Prior to Hipparchus (and Ptolemy) the general goal of the Greeks at least was not accurate astronomical observation but artistic and mythological education. The end result was a sort of geographical description of territorial position and limits.
Circa the 5th-century BCE many of the constellations recognised by the Greeks had become associated with myths. Both the star catalogue (constellation description) of Eudoxus (4th-century BCE) and the star catalogue (constellation description) of Aratus (3rd-century BCE) adopted the vocabulary of myth. In his Catasterismi Eratosthenes (284-204 BCE) completed and standardised this process with each of the constellations being given a mythological significance. The first complete description of the Greek constellations to survive is given by the Greek poet Aratus circa 270 BCE. With only a few exceptions no actual stars are described by Aratus - only constellation figures. This method was undoubtedly inherited from Eudoxus who produced a set of descriptions of constellations in which the relative positions of stars in each of the constellations was described. Eudoxus was likely the first Greek to summarise the Greek system of constellations. The purpose of the Phaenomena by Aratus was to describe the appearance and the organisation of the constellations in the sky with reference to each other.
Ancient 'star catalogues' involved:
(1) Qualitative descriptions of the constellations.
(2) Noting the number of stars in each part of the constellations. (Describing the constellations by means of describing the relative positions of the stars within the constellation figures.)
(3) Noting the general location of the brighter stars within the constellations.
(4) Perhaps providing illustrations of the constellations to accompany the descriptions given.
(5) The illustrations depicting the mythological figures represented by the constellations (rather than the location and brightness of individual stars).
(6) The illustrations depicting the mythological figures represented by the constellations sometimes containing no stars at all - the body of each of the figures simply being filled with text discussing the particular constellation.
(6) Chronological History of Preserved Early Star Maps
Circa 1500 BCE (late 2nd-millennium BCE) - Several royal tombs in Egypt have ceiling/wall paintings of constellation figures. In the New Kingdom period (circa 1500 to 1100 BCE), the constellational representations were painted on temple ceilings (i.e., the Ramesseum ceiling) and on the sepulchral vaults of kings (i.e., the tomb of Senmut). However these are not accurately drawn and are essentially decorative.
Circa 650 BCE - The Assyrian planisphere K 8538 is a circular star map, divided into equal 8 sectors, with constellations depicted in addition to written constellation names, star names, and symbols. It is not a depiction of the whole visible sky.
Circa 300-100 BCE - Kugel celestial globe may be the earliest celestial globe to survive from antiquity. It does not follow the Graeco-Roman astronomical norms of the period as defined by the astronomer Hipparchus. The size and positions of a number of constellations are misplaced.
Circa 150 BCE - Farnese globe depicting most of the Aratean constellation figures (but not individual stars). Believed by art historians to be a Roman copy of an earlier (presumably) Greek original. It is generally thought that the existing sculpture was made in Rome circa 150 CE and is a late copy of a Greek original made circa 200 BCE. Stars may have been painted on the marble globe.
Circa 140 CE - Ptolemy's descriptive star catalogue with the placement of stars within Eudoxan/Aratean constellation figures. The constellation list in Ptolemy's star catalogue standardised the Western constellation scheme.
Circa 150-220 CE - The Mainz celestial globe is a complete celestial globe in that it depicts all 48 Classical constellations (but does not fully agree with the star-catalogue of Claudius Ptolemy) with relative precision.
Circa 670 CE - Chinese Dunhuang star map depicting the whole of the sky visible in China. The oldest known manuscript star chart and, excluding astrolabes, it is the oldest existing portable star map known.
Circa 715 CE - The Aratean constellation set painted on the domed ceiling at the bath house of the Arab palace at Qusayr 'Amra (Jordan). The constellation depiction/mapping mostly followed the Ptolemaic tradition.
Circa 820 CE - The Leiden Aratea is a 9th-century CE copy of an astronomical and meteorological manuscript based on the Phaenomena written by the Greek poet Aratus. (The evidence suggests the Leiden Aratea was probably produced in the royal scriptorium, possibly in 816 CE.) The manuscript contains 39 full-page miniatures.
Circa 1009/10 - Al-Sufi's book on the fixed stars. In al-Sufi's Kitab suwar al-kawakib the constellation figures and the individual stars comprising them are shown separately (i.e., separated from each other) without any information on their relative positions being given. No sky map (with all the constellations charted) appears in the book.
1440 CE - Earliest known western maps of the northern and southern hemispheres with both stars and constellation figures. (These are preserved in Vienna and may have been based on the now lost charts from 1425 owned by Regiomontanus.)
1598 CE - Southern constellations depicted on a celestial globe by Petrus Plancius. (Until the end of the 16th-century star charts contained only the 48 Ptolemaic constellations.)
(7) Individual Star Names
The majority of star names adopted for use in Western nomenclature since the Renaissance are Arabic in origin. The use of names to identify individual stars that formed a constellation - in contrast to the descriptor method identifying the location of the star in the constellation figure - was only really established through the influence of Arabic-Islamic astronomy on Latin Europe during the medieval period. Star names still officially in use are essentially limited to the old pre-telescopic names given to the brighter stars. The more numerous fainter stars, most requiring the use of a telescope to see, are known only by modern catalogue numbers and coordinates. The greatest influence on star names occurred when Ptolemy's book The Great System of Astronomy (Arabic, Almagest) was translated into Arabic (twice in the 9th-century). With the reintroduction of Ptolemy's Almagest into Europe, beginning in the 10th-century CE (its major influence being in the 13th-century), many of the Arabic-language star descriptions using Ptolemy's star catalogue came to be used widely in Europe as names for stars. Quite a lot of prominent stars bear Arabic names, in which the definite article al (corresponding to the English-language 'the') usually appears in front of the names, e.g., Algol, 'The Ghoul.' The inclusion of the definite article as part of the star name (prefix) has now become rather arbitrary.
A number of modern star names derive from the indigenous pre-Islamic traditions of the Arabian Peninsula, where names had been established for the brighter stars. However, the majority of star names remaining in modern use are those star names used by the medieval Arab-Islamic astronomer al-Sūfī, and are Arabic translations of Ptolemy's system of descriptions. Ptolemy's system of descriptions was inevitably perpetuated with the reintroduction of Ptolemy's Almagest into Europe. Following the ancient Greek tradition, the majority of star names are related to their constellation, e.g., the star name Deneb means 'tail' and is the label for the matching part of Cygnus the Swan; the star name Fomalhaut comes from the Arabic meaning 'mouth of the southern fish,' which matches where Ptolemy had described it in his star catalogue in the Almagest. Other star names (not many) simply describe the star itself, such as Sirius, which literally means 'scorching.'
The leading expert on star names in Arab-Islamic astronomy is the German historian Paul Kunitzsch. His research has enabled him to identify two traditions of star names in Arab-Islamic astronomy. The first involves the traditional star names originated by the indigenous pre-Islamic inhabitants of the Arabian Peninsula, which he has named 'indigenous-Arabic'; the second involves the scientific Arab-Islamic tradition, which he designates 'scientific-Arabic.'
(8) The Magnitude System
The magnitude system - denoting the degree of brightness of stars - originated with the Greek astronomer Hipparchus of Rhodes. Circa 130 BCE, Hipparchus created the first known catalogue of stars (totalling approximately 850 stars). This star catalogue does not survive today. Hipparchus listed the stars that could be seen in each constellation, described their positions, and ranked their brightness in a simple way on a scale of 1 to 6, the brightest being 1. He called the 20 brightest stars "of the first magnitude," simply meaning "the biggest." Stars that were not as bright were called "of the second magnitude," or second biggest. The faintest stars Hipparchus could barely see he called "of the sixth magnitude." Thus the system of star magnitudes is one that counts backwards (an inverse scale). The use of this backward numbering method (or rather a similar system) for describing the brightness of a star survives today largely due to its adoption by the influential Hellenised astronomer Claudius Ptolemy. Around 140 CE Claudius Ptolemy copied Hipparchus' magnitude system in his own expanded star catalogue of 1022 stars (included in his work the Almagest). Sometimes Ptolemy added the words "greater" or "smaller" to distinguish between stars within a magnitude class. Because Ptolemy's Almagest remained the basic astronomy text for the next 1,400 years, everyone used the system of first to sixth magnitudes. Prior to the invention of the telescope the "apparent magnitude" system worked quite well.
However, by the middle of the 19th-century, a need arose to define the entire magnitude scale more precisely than by simple eyeball judgment. As more accurate instruments came into play, astronomers found that each magnitude step corresponds to a brightness ratio of about 2.5. This means that magnitude 1 stars are around 100 times brighter than magnitude 6 stars. Also, more accurate measurements allowed the astronomers to assign stars decimal values, like 2.75, rather than rounding off to magnitude 2 or 3. In 1856 the Oxford astronomer Norman Pogson proposed that a difference of five magnitudes be exactly defined as a brightness ratio of 100 to 1. This convenient rule was quickly adopted. One magnitude thus corresponds to a brightness ratio of exactly the fifth root of 100, or very close to 2.512 - a value known as the Pogson ratio. The resulting magnitude scale is logarithmic (in agreement with the mistaken 1850s belief that all human senses are logarithmic in their response to stimuli). However, our perceptions of the world actually follow power-law curves, not logarithmic ones. Thus a star of magnitude 3.0 does not in fact look exactly halfway in brightness between 2.0 and 4.0. It looks a little fainter than that. The star that looks halfway between 2.0 and 4.0 will actually be about magnitude 2.8. The wider the magnitude gap, the greater this discrepancy is. Although scientists have known for some time that the response of our eyes to stimuli/intensity is a power law, astronomers continue to use the Pogson magnitude scale.
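To make the arithmetic concrete, the following is a minimal sketch (not part of the original discussion) of Pogson's definition: a difference of 5 magnitudes corresponds exactly to a brightness ratio of 100, so each magnitude step is the fifth root of 100, about 2.512. The function name and example values are illustrative only.

```python
def brightness_ratio(mag_faint, mag_bright):
    """Brightness ratio implied by two magnitudes under Pogson's definition:
    a difference of 5 magnitudes is exactly a factor of 100 in brightness."""
    return 100 ** ((mag_faint - mag_bright) / 5)

# One magnitude step is the fifth root of 100, about 2.512 (the Pogson ratio).
print(round(brightness_ratio(2.0, 1.0), 3))   # 2.512
# Five magnitude steps give exactly a factor of 100.
print(brightness_ratio(6.0, 1.0))             # 100.0
# Sirius (about magnitude -1.5) compared with a just-visible magnitude 6 star.
print(round(brightness_ratio(6.0, -1.5)))     # about 1000
```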
The logarithmic system is now locked into the magnitude system as firmly as Hipparchus's backward numbering. Ranking star magnitudes on a precise mathematical scale, however ill-fitting, introduced the unavoidable situation that some "1st-magnitude" stars were a whole lot brighter than others. This required astronomers to extend the scale out to brighter values as well as to fainter ones (made visible with the invention of the telescope). Thus the bright stars Rigel, Capella, Arcturus, and Vega are magnitude 0, an awkward value that makes it seem they have no brightness at all! The magnitude scale extends farther into negative numbers: Sirius shines at magnitude -1.5 (minus 1.5).
Usually, when an astronomer talks about magnitude, "apparent magnitude" is meant - referring to the way we perceive stars, viewing them from Earth. Apparent magnitude is usually written with a lower case m, as in 3.24m. However, the brightness of a star is not just a matter of how brightly it shines, but also how far away it is. Modern astronomers came up with another way to measure brightness and call this "absolute magnitude." Absolute magnitude is defined as how bright a star would appear if it were exactly 10 parsecs (about 33 light years) away from Earth. For example, the Sun has an apparent magnitude of -26.7 (because it's very, very close) and an absolute magnitude of +4.8. Absolute magnitudes are usually written with a capital M, as in 2.75M.
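The relation between the two magnitudes can be written as M = m - 5 log10(d / 10 pc). A minimal sketch follows, assuming the standard value of 206,264.8 astronomical units per parsec; it reproduces the Sun example quoted above to within rounding.

```python
import math

AU_PER_PARSEC = 206264.8  # astronomical units in one parsec

def absolute_magnitude(apparent_mag, distance_parsecs):
    """Absolute magnitude: the apparent magnitude the object would have if viewed
    from a standard distance of 10 parsecs (interstellar extinction neglected)."""
    return apparent_mag - 5 * math.log10(distance_parsecs / 10)

# The Sun: apparent magnitude about -26.7 at a distance of 1 AU.
sun_distance_pc = 1 / AU_PER_PARSEC
print(round(absolute_magnitude(-26.7, sun_distance_pc), 2))  # about 4.87, i.e. roughly +4.8
```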
(9) Uses for Stars
Constellations serve as a mnemonic aid for identifying stars and their positions (including relative positions) in the night sky. Establishing artificial grouping relationships among the more prominent of the approximately 3500 visible stars made it easy for early people to both remember them and to locate them quickly in a segment of the night sky. Uses for stars and their arrangement into constellations/asterisms - and their yearly cycle - include: (1) determining new year, (2) festival regulation, (3) direction finding (nautical navigation and land navigation), (4) weather indicators, (5) seasonal indicators, (6) agricultural calendars, (7) weather prediction, (8) time-keeping (time of night and time of year), and (9) identifying sky positions. (See: "Some Aspects of Primitive astronomy." by A. P. FitzGerald (The Irish Astronomical Journal, Volume 1, Number 7, September, Pages 197-212).)
The quite well preserved Babylonian cuneiform tablet BM 36609 contains an important compendium (a compilation of short texts) dealing with the use of stars in late Babylonian astronomy.
In the Greek city-states constellations were introduced to help identify and remember the rising and setting of individual stars.
Until the introduction of the Julian calendar reform of 46 BCE the only reliable way of telling the time (and seasons) was to utilise celestial chronology. People needed to know the positions of the constellations and be aware of the meteorological phenomena accompanying them. For the Classical World all essential knowledge to enable this was included in Aratus' Phaenomena.
(10) Constellation Development
We have material evidence that, in the occident, small star groups were just as likely to be early inventions and large ones just as likely to be late. There are sound reasons to believe the constellations originated with, and developed from, seasonally/agriculturally significant stars and simple asterisms. Also, there are no solid reasons to believe that constellation development - including the size of a constellation and the amount of sky considered necessary to map - followed a simple developmental path. Diverse cultural responses to the night sky, made over lengthy time periods, were inevitable. There is every indication that ancient cultures invented constellation patterns that matched the functions/purposes attributed to the star groupings/arrangements they devised. Hydra is the largest of the modern constellations and measures 1303 square degrees. The Greeks identified 27 stars comprising Hydra. Argo was the largest constellation in Greek uranography, measuring 1867 square degrees. The Greeks identified 27 stars comprising Argo. The evidence is clear that the constellation Argo, the largest constellation in the Greek sky, was the late invention of the Hellenistic period. It appears to have been invented by the Greeks under the influence of the story of the Argonauts and their voyage for the Golden Fleece. The Pleiades asterism (one of the smallest 'constellations'), measuring some 6 square degrees, has existed and been used all around the world as a seasonal/agricultural indicator at a date long preceding the existence of Argo. (Argo had no seasonal/agricultural function.) It is impossible to have a late origin for the Pleiades.
(11) Constellation Transmission (Diffusion and Migration)
Constellation transmission was enabled in a multitude of ways - both BCE and CE.
Four geographic/cultural blocks can be considered: (1) Orient (China, Mongolia, Korea, Southeast Asia), (2) Near East (Mesopotamia, Arabian Peninsula, Iran, India), (3) Mediterranean (Greece, Rome, Turkey, Levant, Egypt), and (4) Northern Europe. Empire establishment through warfare: (1) Egyptian empire, (2) Assyrian empire, (3) Persian empire, (4) Greek empire, (5) Roman empire, (6) Mongolian empire, (7) Islamic empire, and (8) the Crusades. Important activities include: (1) Migration, (2) Trade (Routes), (3) Travelling scholars/entertainers (storytellers)/crafts people. The Phoenicians (seafaring traders) were conduits for cultural exchange. The trade route termed the Silk Road offered access to traders, merchants, pilgrims and travellers, etc, between China and the Near East and Mediterranean. Key religions and related activities: (1) Christianity, (2) Islam, and (3) Buddhism. Buddhism was very much a missionary religion. Buddhist missionaries travelled from India to China.
The Near East (Mesopotamia) was an early source of constellations and cultural transmission.
Later means included European exploration, migration, colonisation and empire building. Also, the Byzantine empire. Constantinople, the capital of the Byzantine empire, like all great capitals, was a melting-pot of heterogeneous elements: all seventy-two tongues known to man were represented in it, according to a contemporary source.
Methodological issues need to be clarified. The inventory method of comparison contributes little. Also, comparisons need to be culturally specific. The contact model of transmission needs to be explored; as does bilingualism within cultures.
(2) Dating the Constellations
(1) "Void Zone" Method
The ideas of the Swedish amateur astronomer Carl Swartz on the origins of the constellations were first published in his Recherches sur l'origine et la signification des Constellations de la Sphère grecque (1807). The revised (standard) edition was published as Le Zodiaque expliqué (1809). Unlike Charles Dupuis and others he identified a recent date for the constellations and the zodiac. In the first and second editions of his book Carl Swartz proposed that the unconstellated area of the southern sky gave an approximate date for the formation of the constellations. Specifically he: (1) identified the unmapped space in the southern sky as significant for determining the origin of the constellations; (2) argued a case for the essential unity of the constellations as a single set; (3) estimated that the radius of the "void zone" was about 40 degrees; and (4) deduced from the "void zone" that the date of origin of the constellations was 1400 BCE.
Carl Swartz's ideas for the origin of science and culture in the Caucasus region were based on Von den kaukasischen Völkern der mythischen by Theodor Ditmar (1789). The Royal Observatory, Greenwich sunspot specialist Edward Maunder discovered a copy of Le Zodiaque expliqué in the observatory library and in a series of articles from 1898 to 1913 reintroduced many of the ideas of its author.
The "void zone" argument, though popular since its reintroduction by Edward Maunder, has multiple problems. The chief premise of the "void zone" is that the classical Greek constellations (i.e., the Aratean constellations) were designed at one definite time and in one place, according to a preconceived plan. The argument for establishing the time and place of the Aratean constellations is based on the extent of the vacant space left around the south pole of the celestial sphere when all but the Aratean constellations are removed; and the apparent movement of the stars due to precession. The further assumption made is that the area of the globe that was not constellated in the description of Aratus was centred on the south celestial pole at the date when the constellations were fixed.
The size of the "void zone" is taken as a clue to the latitude at which the constellation inventors lived. A date is found when, by allowing for precession, the centre of the "void zone" on the globe is in the position of the south celestial pole.
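The geometry the argument relies on can be sketched in rough form (a simplification for illustration, not Swartz's or Maunder's own calculation): stars closer to the south celestial pole than the observer's northern latitude never rise, so the radius of the unconstellated cap approximates that latitude; and because precession carries the celestial pole around a circle at roughly 71.6 years per degree, the angular displacement between the cap's centre and the present pole yields a date. The 47-degree displacement used below is an illustrative value chosen to reproduce the circa 1400 BCE estimate quoted above.

```python
# A rough sketch of the "void zone" geometry, assuming the unconstellated zone is
# circular and was centred on the south celestial pole at the epoch of design.
PRECESSION_PERIOD_YEARS = 25_772                    # one full precessional cycle
YEARS_PER_DEGREE = PRECESSION_PERIOD_YEARS / 360    # about 71.6 years per degree

def latitude_from_void_radius(radius_deg):
    """Stars within an angular distance of the observer's (northern) latitude from
    the south celestial pole never rise, so the radius of the permanently invisible
    (unconstellated) cap roughly equals the observers' latitude."""
    return radius_deg

def epoch_from_pole_displacement(displacement_deg, reference_year=2000):
    """Estimate the design epoch from how far (in degrees along the precessional
    circle) the void-zone centre lies from the present south celestial pole."""
    return reference_year - displacement_deg * YEARS_PER_DEGREE

print(latitude_from_void_radius(40))             # 40 (degrees north, on this reading)
print(round(epoch_from_pole_displacement(47)))   # about -1365, i.e. roughly 1400 BCE
```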
(2) "Void Zone" Method Flaws
The subjectivity of the method is demonstrated by the varying estimates of the radius of the "void zone" (30 degrees to 40 degrees) and the varying estimates of the date of origin given by precession (1400-2800 BCE). In any case, the boundaries of the "void zone" cannot be accurately defined because we lack knowledge of the original boundaries of the classical Greek constellation figures. Due to this lack of knowledge of the boundaries of the Aratean constellations the "void zone" method is inherently subjective and its use can lead to no real agreement (as it has failed to do) regarding the latitude and date for the constellations being designed at one definite time and place.
Many of the Aratean constellations show a similarity with Babylonian constellations. The Greek constellation scheme of Aratus of Soli (3rd-century BCE) contains a mix of both Babylonian constellations and non-Babylonian constellations. The Babylonian component of the Aratean constellations is traceable to both Babylonian "star calendar" constellations of the 2nd millennium BCE and also to Babylonian constellations listed in the later Mul.Apin series (circa 1000 BCE). (The few known 8th-century BCE constellations of Homer mirror the constellations already existing in the Babylonian scheme.) The Babylonian scheme of constellations has always been a mix of constellations mentioned by Aratus and other constellations outside the Aratean scheme. A definite Babylonian influence on the later Greek scheme of constellations is reasonably indicated. It is obvious that the Greeks borrowed certain constellations from the Babylonians and it is obvious that the constellations could not have originated, or been adopted, as a single devised scheme by either the Babylonians or the Greeks.
If the constellations originated as a set circa 2000-2800 BCE, as commonly claimed by the proponents of the "void zone" method, then they cannot have originated with the Greeks. However, the latitude at which the constellations were believed to have originated as a single scheme cannot refer to Mesopotamia because their earliest scheme of constellations, though dating to the 2nd millennium BCE, was a mix of constellations mentioned by Aratus and other constellations outside his scheme.
Crediting the Minoans, as some like to do, as the makers of the classical constellations and offering explanations based on the destruction of Minoan civilization and the later ineptitude of the Greeks as observers are also not convincing. There is no evidence that the classical Greek scheme of constellations existed anywhere prior to its evolvement in Greece circa 500 BCE. This includes the fact that there is no evidence that the particular Greek scheme of 12 zodiacal constellations existed anywhere prior to its evolvement in Greece circa 500 BCE. The difficulty with maintaining an ancient zodiac is explaining how a late Mesopotamian zodiac (developed circa 500 BCE), comprised of 12 constellations (and 12 equal divisions) and substantially borrowed by the Greeks, could have been in use by anybody hundreds of years earlier. (Or even thousands of years earlier, prior to the existence of the Babylonian civilization which demonstrably created it.)
The flawed "void zone" argument has become a common tool for maintaining that a Neolithic zodiac (and fully constellated sky) can reasonably be proposed. The "void zone" argument can hardly substitute for the lack of clear evidence (which tends to fall under the murky heading of "tradition"). Even if the "void zone" argument were correct it has never offered support for the idea that the constellations could have existed as a deliberately planned set extending back some 6000-8000 years BCE (or further). The use of the "void zone" argument constrains the feasible range for the dating of the constellations if they are considered to have originated as a deliberately planned scheme. Interestingly, Edward Maunder, a committed proponent of the "void zone" argument, in his later articles on the topic attempted to overcome this limitation by implying a very slow developmental period for the final scheme of constellation design (see: "Origin of the Constellations", The Observatory, Volume 36, 1913, Page 330).
(1) The Controversy Over Possible Paleolithic Astronomy
It is now usual to argue for the existence of Paleolithic lunar calendars as a means to establish the existence of astronomy in the Paleolithic Period. The strongest recent (but not original) proponent of the existence of Paleolithic lunar calendars was the ex-journalist Alexander Marshack (see: The Roots of Civilization (1972)). Other persons have since moved further and assert the establishment of constellations in the Paleolithic Period. Arguments for the origin of constellations in the Palaeolithic Period remain very controversial.
(2) The Controversy Over Possible Paleolithic Lunar Calendars
The original proponent of the existence of Paleolithic lunar calendars was the British geologist Professor Thomas Rupert Jones (1811-1911). He edited the results of the collaborative work of the French paleontologist Professor Édouard Lartet (1801-1871) and the British ethnologist Henry Christy (1810-1865). On Christy's death his half-finished book Reliquiae Aquitanicae was further partly completed by Lartet. On Lartet's death the book was finally edited and completed by Thomas Rupert Jones (known by the preferred name Rupert Jones). It was initially issued in parts but published complete in 1875. Rupert Jones saw the markings (notches) on the bones and antlers from the Upper Paleolithic, on Plate LXXV of the book, not as art (simple decorative marks) but as arithmetical notations, tallies, or calendars.
Alexander Marshack's theory of the existence of Paleolithic lunar calendars remains controversial and numerous objections have been raised against it. Early on, many anthropologists objected that the schematic notational apparatus Marshack devised could extract a lunar cycle from almost any set of markings. As quick examples: Microscopic analysis of some of the same artifacts by other scholars (including Francesco d'Errico) yields different counts of marks, experimental replication of such artifacts suggests other reasons for the distinctions among marks, and there is reason to believe that all the marks were made at one time, rather than being a sort of tally system of time. (See: Megaliths, Myths and Men by Peter Brown (1976); "Paleolithic Lunar Calendars: A Case of Wishful Thinking?" by Francesco D'Errico (Current Anthropology, Volume 30, Number 1, 1989, Pages 117-118) and see also his reply to Alexander Marshack in Current Anthropology, Volume 30, Number 4, 1989, Pages 494-500; "Upper Paleolithic Notation Systems in Prehistoric Europe." by Simon Holdaway and Susan Johnston (Expedition, Volume 31, Number 1, 1989, Pages 3-11); "On the Impossibility of Close Reading: The Case of Alexander Marshack." by John Elkins (Current Anthropology, Volume 37, Number 2, 1996, Pages 185-201) and see also responses and replies on Pages 201-226; "Review of The Roots of Civilization." by Iain Davidson (American Anthropologist, New Series, Volume 95, Number 4, December, 1993, Pages 1027-1028); "Marking Time." by Daniel Rosenberg (Cabinet Magazine, Issue 28: Bones, Winter, 2007-2008); and see also critical reviews of Alexander Marshack's publications by Andrée Rosenfeld (Antiquity, Volume XLV, 1971, Pages 317-319) and by Arden King (American Anthropologist, Volume 75, Number 6, 1973, Pages 1897-1900).)
Francesco d'Errico made the point that Alexander Marshack's classification was based on Marshack's own intuition and that, in reporting his results, Marshack had manipulated the number of marks and sequences in order to achieve an accumulation correlated to the motion of the sun or moon.
It was after reading an article by Jean de Heinzelin ("Ishango," Scientific American, Volume 206, June, 1962, Pages 109-110) that Marshack (1918-2004) was prompted to begin systematically comparing similarly marked bones. He eventually concluded that a very wide range of examples, including the Lartet bone and the Blanchard bone, adhered to a lunar pattern. Marshack did not actually claim that series of marks on certain bones were intended to be lunar calendars. He did, however, believe they were tallies related to lunar cycles. Marshack speculated that the notches could be read as examples of "lunar phrasing." Marshack never publicly released details of all the objects he thought were related to marking lunar cycles - only the ones he thought provided the best proof of his ideas.
A stable lunar calendar is not easily achieved and is a complicated undertaking. The principal arguments against accepting Marshack's lunar calendar interpretation of scribed marks on certain bones relate to the issues of (1) where he decides a particular sequence of marks begins, and (2) how he decides to count the marks. The critical investigation by Francesco D'Errico of Alexander Marshack's claim that lunar calendars were kept by the Azilian culture of France circa 12,000 years ago has not been supportive of the claim. The important experimental work carried out by Francesco D'Errico found that marks that appeared to be made over time by different tools on items that were claimed to be lunar calendars were, in all likelihood, made by the same tool and without time gaps. Further, no example has ever been given by Marshack, or his supporters, of any scheme of correct lunar month counts on any of the notched bones claimed to be lunar calendars. "Marshack's lunar-notation months vary from 27 to 33 days; the first and last quarters vary from 5 to 8 days and periods of Full Moon and New Moon from 1 to 4 days - plus an allowance of ± 1 day for errors in observation. From these very flexible parameters the lunar model used by Marshack can be made significant for any number or sequence of numbers between 1 and 16 and between 26 and 34. The difficulty in accepting Marshack's ideas is that for each example he has studied, each seems to require assumptions to be made about 'cloud-outs' or it requires other adjustments to account for inconsistencies. With good reason critics have claimed that his ideas are too glib and allow too much manoeuvring or arbitrary jiggling of numbers to suit circumstances. (Megaliths, Myths and Men by Peter Brown (1976), Page 29.)" Also, the moon itself is never depicted on any of the bones claimed to comprise lunar calendars; but animals and other symbols sometimes are.
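To make Brown's criticism concrete, the following sketch enumerates which small tally counts can be "matched" under the flexible parameters quoted above (phase periods of 1-4 days, quarters of 5-8 days, whole months of 27-33 days, sums of two sub-periods, and a one-day observational allowance). The combination rules used here are a simplification for illustration only, not Marshack's published procedure.

```python
# Which tally counts can be "matched" under the flexible parameters quoted above?
phases = range(1, 5)      # New/Full Moon periods: 1-4 days
quarters = range(5, 9)    # first/last quarter periods: 5-8 days
months = range(27, 34)    # lunar-notation months: 27-33 days

candidates = set(phases) | set(quarters) | set(months)
for a in list(phases) + list(quarters):
    for b in list(phases) + list(quarters):
        candidates.add(a + b)                    # sums of two sub-periods

matchable = {c + err for c in candidates for err in (-1, 0, 1)}  # +/- 1 day allowance
print(sorted(n for n in matchable if n >= 1))
# Prints every whole number from 1 to 17 and from 26 to 34 - essentially the
# "1 to 16 and 26 to 34" range quoted from Brown above.
```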
(3) The Controversy Over the 7-Day Week as Lunar
The concept of the 7-day week seems to have originated in Mesopotamia/West Asia. The 7-day week has no astronomical significance. There is no 7-day cycle in any astronomical or other natural phenomena. Relating the 7-day week to four phases of the moon is not obvious. Hence the concept of a 7-day lunar week is different to the issue of a lunar month. (The 7-day week is not actually a particularly good system for dividing the lunar month as it simply does not divide evenly into the actual duration of such. For this reason very few ancient peoples used a 7-day scheme.) The lunar month can at least be tracked by observing the cycles of the moon. According to some authorities the Babylonians had divided the year into 7-day weeks at least as early as the 15th-century BCE. However, the epic of Atra-Hasis (Story of the Flood), preserved in Akkadian from the Old Babylonian period, has the earliest reference to what could be a 7-day week: "After the storm had swept over the country for seven days and seven nights." More likely it is common number symbolism. Simply talking of 7 days and 7 nights does not make a 7-day week. In Mesopotamia the number 7 was the most commonly revered number. One non-lunar theory is simply that the 7-day week originated as a planetary week based on the seven identified celestial bodies Sun, Moon, Mercury, Venus, Mars, Jupiter, and Saturn. Some persons still assert the Babylonians named the seven days of the week after these seven celestial bodies that they knew well. Also, the Babylonian sacred number seven was probably related to the seven "planets." There is no indication that the number 7 or a 7-day week can easily be traced back to the Paleolithic Period.
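A short arithmetic check of the point made in the parenthesis above, using a mean synodic month of about 29.53 days (a modern value, not one given in the text):

```python
SYNODIC_MONTH_DAYS = 29.530589  # mean length of a lunation (modern value)

# A lunation is not evenly divisible by 7: four 7-day weeks leave about 1.5 days over,
# so a rigid 7-day cycle quickly drifts out of step with the Moon's phases.
weeks, remainder = divmod(SYNODIC_MONTH_DAYS, 7)
print(int(weeks), round(remainder, 2))   # 4 weeks with about 1.53 days left over

# Accumulated offset of a fixed 7-day cycle against the start of each lunation.
for n in (1, 2, 3, 6):
    print(n, round((n * SYNODIC_MONTH_DAYS) % 7, 2))
```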
(4) Possible Identification of Paleolithic Constellations
Astronomical interpretations are given to a number of the Lascaux (France) cave paintings. Some researchers believe that the #18 Lascaux aurochs with the two associated sets of dots represents the constellation Taurus. According to Frank Edge a group of 6 dots painted above the shoulder of aurochs #18 represents the Pleiades open star cluster, and that another group of V-shaped dots painted on the aurochs' face represents the Hyades open star cluster.
(5) Current Status of The Controversy Over Possible Paleolithic Astronomy
To date none of the arguments attempting to show the existence of some sort of Palaeolithic astronomy can be considered convincing. Well worth consulting is the hefty book Eine Himmelskarte aus der Eiszeit? (1999) by Dr Michael Rappenglück. The author, who undoubtedly pushes the envelope, is an expert on the issues.
There is perhaps archaeological evidence that the Big Dipper stars were anciently recognised as a constellation. In her 1954 article on "Astronomy in Primitive Religion." (The Journal of Bible and Religion, Volume 22, Number 3, July, Pages 163-171) the noted astronomer Maud Makemson (relying on the work of the pioneer French archaeoastronomer Marcel Baudouin (1860-1941, Secretary of the Société Préhistorique Française) published in 1912 and 1913) reproduced what she believed was a representation of stars in Ursa Major and Boötes incised on a fossilised and silicified sea-urchin (Echinus) on an amulet from stone-age northern Europe. Her further interpretation of the amulet included: (1) that the engraver had taken care to indicate the differences in brightness of the stars by varying the sizes of the cavities, and (2) that the depicted configuration of the Big Dipper stars indicated a high age for the origin of the amulet.
According to the archaeologist Colin Renfrew the Indo-Europeans entered southeastern Europe from Asia Minor/the Balkans circa 7000 BCE. The main constellations of the Indo-Europeans are identified by Jacques Duchesne-Guillemin as: (1) Big Bear, (2) Pleiades, (3) Small Bear, and (4) Hyades.
(4) Middle East
(1) Establishment of Babylonian Uranography
There is a lack of both astronomical and astrological texts in Mesopotamia until the late 2nd-millennium BCE. There is no compelling reason for assuming that the astronomical texts from the 2nd-millennium BCE relied upon astronomical texts from the 3rd-millennium BCE. The Sumerians of the 4th-millennium BCE did, however, make simple calendrical calculations based on the movements of the celestial bodies. Circa 2700 BCE the goddess Nisaba (the patron goddess of scribes) had a knowledge of astronomy attributed to her and her temple in Eres was called the "House of the Stars." She had a lapis-lazuli tablet which is sometimes called the "tablet with the stars of the heavens" or "tablet with the stars of the pure heavens." It was kept in her "House of Wisdom." It is possible that this lapis-lazuli tablet - which was connected with astronomy - was a kind of star-map or symbolic representation of the heavens. However, all the extant evidence indicates a set of constellations covering the entire visible sky was not consolidated until circa the last quarter of the 2nd-millennium BCE. This approximately matched the completion of the omen series Enûma Anu Enlil. The earliest Mesopotamian star list that covers the entire visible sky is contained in the two-tablet Mul.Apin series. The Mul.Apin series contains the earliest (surviving) full description of the Mesopotamian constellations. Because the Mul.Apin series is a compilation from various sources no single date is assignable. It is difficult to identify the history of the text or the sources for its parts. Analysing all of the star list data in the Mul.Apin series the American astronomer Brad Schaefer has concluded (2007) that the epoch for the data comprising the Mul.Apin star lists is 1370 ± 100 BCE with a latitude of 35° ± 1.2°. The actual observations to establish the data through averaging were obviously a little earlier. This corresponds with the cuneiform evidence (the omen series Enuma Anu Enlil, the Astrolabes (i.e., star calendars), the creation epic Enuma Elish) indicating that most of the Mesopotamian constellation set was established during the late 2nd millennium BCE.
The Babylonians gave single or short names to the constellations they originated. The Babylonian scheme of constellations, except for the development of the zodiacal scheme of 12 constellations, was mostly finalised by the late 2nd-millennium BCE (i.e., near the end of the Kassite Period circa 1160 BCE). The only significant change that took place in the early 1st-millennium BCE was the development of the 12-constellation zodiacal scheme (and the shift from the scheme of the "three ways" to the ecliptic as the primary celestial reference point). The Babylonian names for the stars forming a constellation are descriptive phrases that serve to identify their location within the constellation figure. In a section of the Mul.Apin astronomical compendium, due to the use of the horizon as reference point for a list of simultaneous risings and settings of constellations, these particular constellations are approximately identifiable. The earlier Babylonian "star calendars" (commonly misnamed "Astrolabes") do not provide any suitable information to enable the identification of the constellations. This is simply because we do not have any information regarding the principles of their categorisation. Tablet 1 of the Assyrian Mul.Apin compendium (circa 1000 BCE) contains a qualitative description of constellations and the star positions comprising such. The incomplete Neo-Assyrian text (VAT 9428, circa 400 BCE) from Assur originally contained a complete qualitative star by star description of the Babylonian constellations.
Whether or not the Mesopotamians only used a single series of constellations throughout the country at all times is unknown. Alastair McBeath (Tiamat's Brood, Page 41) states: "... Gössmann's work and other cuneiform sources argue for several variant traditions. It is possible each city-state had at least some constellations that were more or less unique to them, as with their gods." There is, however, the possibility that some particular names that occasionally appear were late and/or variant (alternative) names of constellations/stars.
(2) Boundary Stone Iconography: Constellation Symbols or God Symbols?
Babylonian boundary-stone (kudurru) iconography (Cassite Period, 1530-1160 BCE) includes numerous symbolic depictions.
In the early period of Assyriology it was common to identify these symbols as depictions of the zodiacal constellations. Further work in Assyriology has changed this assumption. It is not established that constellations or constellation symbols are being depicted. It is established, however, that god/goddess symbols are depicted. For a recent attempt to establish the astral nature of kudurru symbols (from the Cassite Period, circa 1530-1160 BCE) see: "Eine neue Interpretation der Kudurru-Symbole," by Ulla Koch, Joachim Schaper, Susanne Fischer, and Michael Wegelin (Archive for History of Exact Sciences, Volume 41, 1990/1991, Pages 93-114). However, the attempts to date kudurru by assuming their iconography has astral significance and then using the arrangement of their iconography to establish astronomical dates are both speculative and unproven. Ursula Seidl, a present-day kudurru expert, maintains in her article "Göttersymbole und -attribute." (Reallexikon der Assyriologie (Band 3, 1957-1971, Pages 483-490)) that kudurru iconography has no astral significance. (See also her book: Die Babylonischen Kudurru-Reliefs Symbole Mesopotamischer Gottheiten (1989). In this book, regarded as the standard study of kudurru iconography, she maintains her scepticism that kudurru symbols have an astral significance.)
(3) Mul.Apin Series
The broad astronomical content and significance of the (two-tablet) Mul.Apin series had been identified by the English assyriologists Archibald Sayce and Robert Bosanquet in a journal article published in 1880. The first part of the Mul.Apin series to be published was BM 86378 in Cuneiform Texts from Babylonian Tablets in the British Museum: Part XXXIII (Plates 1-8) by Leonard King (1912). The tablet used was an almost complete copy of tablet 1. Another important early article was "A Neo-Babylonian Astronomical Treatise in the British Museum and its Bearing on the Age of Babylonian Astronomy." by Leonard King (Proceedings of the Society for Biblical Archaeology, Volume 35, 1913). This article by the English assyriologist Leonard King drew attention to the importance of this text for identifying the Babylonian constellations. In the next two years numerous articles and books appeared that utilised its star list information in the attempt to identify the Babylonian constellations and the stars that comprised such.
This principal copy of tablet 1 probably dates to circa 500 BCE and is a late Babylonian copy of tablet 1 of the astronomical compendium Mul.Apin. The earliest copies were recovered from the royal archives of the Assyrian King Assurbanipal (667-626 BCE) in Nineveh (and also from Assur). The Mul.Apin series contains the most comprehensive surviving star/constellation catalogue. It is largely devoted to describing the risings and settings of constellations/stars in relation to the schematic calendar of twelve 30-day months. The text of tablet 1 was able to be completely restored with the aid of five copies - one dated to the Neo-Babylonian Period, two from Assurbanipal's library (hence written before 612 BCE), and two from Assur. The principal copy of the second tablet is VAT 9412 from Assur, dated 687 BCE. (This is the oldest of the texts.) Multiple copies of tablet 2 are known: principally three from Assur, three from Assurbanipal's library, and one dated to the Neo-Babylonian period. There are also texts of Mul.Apin in which the two tablets are combined in one large tablet. The third tablet connected to the Mul.Apin series by some modern commentators was probably only an occasionally added appendix to Mul.Apin.
The Mul.Apin series (the name being derived from its opening words) is obviously a compilation of nearly all astronomical knowledge of the period before 700 BCE. (Because the Mul.Apin series is a compilation from various sources no single date is assignable.) It is difficult to identify the history of the text or the sources for its parts. However, it is reasonably certain the origin of the Mul.Apin series dates to the Assyrian Period circa 1000 BCE. (Component parts of Mul.Apin date at least to the early first millennium BCE.) The Mul.Apin series contains improvements to the older astrolabe lists of the stars of Anu, Enlil, and Ea. Various facts make a Babylonian origin of the series probable. Everything that is known about the astronomy of this period is in some way related to the series Mul.Apin. The Mul.Apin series follows the "astrolabe" system (i.e., "three stars each" calendrical system) very closely, but at the same time, it also makes some substantial improvements.
Mul.Apin is essentially a series of structured lists grouped into 18 sections. Tablet 1 basically contains eight sections (including five star lists): (1) a list of 33 stars in the Path of Anu, 23 stars in the Path of Enlil, and 15 stars in the Path of Ea; (2) a sequential list of (Morning Rising) dates in the ideal calendar (i.e., based on a year comprised of 12 months of 30 days each) on which 36 fixed stars and constellations rose heliacally; (3) a list of simultaneously rising and setting constellations; (4) time intervals (periodicity) between the Morning Rising dates of some selected stars; (5) the visibility of the fixed stars in the East and the West; (6) a list of 14 ziqpu-stars (i.e., stars which culminate overhead as more fundamental stars heliacally rise) [May be deemed secondary stars.]; (7) the relation between the culmination of ziqpu-stars and their Morning Rising; and (8) a list of stars and planets in the path of the moon. (The beginning of the second tablet continues the listing of (8) in tablet 1.) Tablet 2 basically has ten sections dealing with: (9) the path of the sun and the planets and the path of the moon; (10) Sirius data (rising dates) relating to the equinoxes and solstices; (11) the heliacal risings of some further fixed stars, wind directions; (12) data relating to the five planets (i.e., the planetary periods); (13) the four corners of the sky; (14) the astronomical seasons (i.e., the sun's risings on the eastern horizon on the days of the solstices and equinoxes); (15) Babylonian intercalary practice (i.e., a scheme (actually two schemes) of intercalary months); (16) gnomon tables detailing shadow lengths and water clock data (i.e., weights of water for their clocks) [A list showing, by mathematical calculations, when the shadow of a gnomon (vertical rod) one cubit high is 1, 2, 3, 4, 5, 6, 7, 8, 9, or 10, cubits long at various seasons.]; (17) the length of a night watch on the 1st and 15th day of the month, tables of the period of the moon's visibility (rules for calculating the rising and setting of the moon); and (18) astral omens connected with fixed stars and comets.
A list of 17/18 stars/asterisms in the path of the moon is given. A statement that the Sun, Moon, and five planets were considered to move on the same path also appears. Reports of lunar eclipses dating from the 7th-century BCE are also recorded.
The Mul.Apin series contains the earliest (surviving) full description of the Mesopotamian constellations. Its detailed constellation material dates to the late 2nd-millennium BCE and possibly relates to the Mesopotamian constellations being largely formalised around the time of the completion of the omen series Enuma Anu Enlil. The data contained in the Mul.Apin series is not quantifiable (i.e., precisely defined) and appropriate assumptions are required to be made (i.e., of the stars forming each constellation and which of these stars were listed to rise heliacally). In a Hastro-L posting (June 5, 2007) the assyriologist Hermann Hunger explained: "The tablets contain no observations. They state on which calendar date certain phenomena (mostly risings and settings) are supposed to occur. Since that calendar used real lunar months, and years consisting of either 12 or 13 such months, the date of a stellar rising, e.g., cannot occur on the same date each year. Assuming that the dates given in the text are the result of averaging, one can use them as if they were observations."
Analysing all of the star list data in the Mul.Apin series the American astronomer Brad Schaefer has concluded (2007) that the epoch for the data comprising Mul.Apin star lists is 1370 ± 100 BCE with a latitude of 35° ± 1.2°. The actual observations to establish the data through averaging were obviously a little earlier. This corresponds with the cuneiform evidence (the omen series Enuma Anu Enlil, the Astrolabes (i.e., star calendars), the creation epic Enuma Elish) indicating that most of the Mesopotamian constellation set was established during the late 2nd millennium BCE.
The inclusion of an anthology of 47 celestial omens (drawn from a variety of Mesopotamian celestial divination texts) at the end of the Mul.Apin series suggests its goal was to serve as an introduction to celestial omen literature and the practice of celestial divination. The data contained in the Mul.Apin series was functionally important in the practice of celestial divination in Mesopotamia. The intended audience for the text would have been scribes receiving practical training in celestial divination. (See: "Teaching the Stars in Mesopotamia and the Hellenistic Worlds." by Jeffrey Cooley (Humanitas, Volume 28, Issue 3, Spring, 2005, Pages 9-15).)
Note: Ziqpu-stars were stars "so chosen that one crosses the meridian before dawn, in the middle of each month, as another constellation is rising heliacally." (See: Mul.Apin by Hermann Hunger and David Pingree (1989) Page 142.) The ziqpu-stars were useful if, for whatever reason, the eastern horizon was obscured and the heliacal rising of important stars was unable to be directly observed. The most common version of the ziqpu-star list contained 25 stars.
(4) Identification of Mul.Apin Constellations and Stars
Circa 1900 little was known with certainty regarding the identification of Babylonian constellation names and star names. Though cuneiform script had been successfully deciphered for decades the meanings of numerous words either remained unknown or were incorrectly understood. The types of astronomical texts available circa 1900 were (1) late Babylonian observational texts (4th to 1st century BCE); (2) mathematical-astronomical texts (from the latest period of Babylonian astronomy); (3) omina literature regarding celestial events; and (4) a few lists of constellation/star names. The observational texts and mathematical-astronomical texts contained few names of celestial bodies - mostly the names of planets and the constellations of the zodiac. The type of information contained in the constellation/star lists in Mul.Apin tablet 1 (BM 86378), an autograph copy of which was first published by the British Assyriologist Leonard King in 1912, provided a unique opportunity for the identification of Babylonian constellations.
The primary effort in successfully identifying the constellations and star names listed in BM 86378 was carried out first by Franz Kugler and then by Carl Bezold and August Kopff. The Kopff-Bezold results largely agree with the identifications made by Franz Kugler in his Supplement 1 (1913) to his Sternkunde und Sterndienst in Babel. Further work by later scholars largely confirmed their results. There were 16 agreements in identification between Kugler, Weidner, and Kopff-Bezold. This comparatively low number is due to the lesser number of identifications made by Ernst Weidner.
An early study of Mul.Apin tablet 1 and the identification of Babylonian constellations with modern star groups was Zenit- und Aequatorialgestirne am babylonischen Fixsternhimmel (1913). In this publication the assyriologist Carl Bezold, with the assistance of the German astronomer August Kopff and the participation of the German philologist Franz Boll, examined the contents of BM 86378. The identification of 78 Babylonian constellations and star names is made. The 59-page pamphlet gives the transcription and (German-language) translation of BM 86378 and a detailed comparison of the results of Franz Kugler, Ernst Weidner, and August Kopff and Carl Bezold, in identifying the stars and constellations listed. The pamphlet is valuable in reproducing the particular cuneiform signs for all 78 constellations and star names investigated.
The two tablets comprising the Mul.Apin series are essentially a series of structured lists grouped into 18 sections. Tablet 1 basically contains eight sections (including five star lists):
(1) A list of 33 stars in the Path of Anu, 23 stars in the Path of Enlil, and 15 stars in the Path of Ea.
(2) A sequential list of (heliacal rising) dates in the ideal calendar (i.e., based on a year comprised of 12 months of 30 days each) on which 36 fixed stars and constellations rose heliacally.
(3) A list of simultaneously rising and setting constellations.
(4) Time intervals between the heliacal rising dates of some selected stars.
(5) The visibility of the fixed stars in the East and the West.
(6) A list of 14 ziqpu-stars (i.e., stars which culminate overhead).
(7) The relation between the culmination of ziqpu-stars and their heliacal rising.
(8) A list of stars and planets in the path of the moon. (The beginning of the second tablet continues the listing of (8) in tablet 1.)
The data contained in the Mul.Apin series is not quantifiable (i.e., precisely defined) and appropriate assumptions are required to be made (i.e., of the stars forming each constellation and which of these stars were listed to rise heliacally).
Kugler in his Sternkunde und Sterndienst in Babel, Erg. 1, used lists (2), (3) and (6) and computed for 500 BCE at Babylon. Kopff used the same lists and computed for 600 BCE at Nineveh. I am presently unsure what lists Weidner used and what date and location he computed for. Later researchers used different lists. The German assyriologist Johann Schaumberger in his Sternkunde und Sterndienst in Babel, Erg. 3, used lists (1) and (2). The Dutch mathematician Bartel van der Waerden in his Anfänge der Astronomie (1966) used lists (2) and (4). List (4) is compiled from list (2) and its data is most subject to inaccuracy. Many significant differences exist between the identifications made by these four scholars. Erica Reiner and David Pingree, Babylonian Planetary Omens: Part Two (1981), using lists (3) and (6) in conjunction with a planetarium projector, concluded that the data best fit the date 1000 BCE and the location of Nineveh (circa 36° north). List (3) is independent of the schematic dates of risings in list (2). Also, the simultaneously setting constellations of list (3) are clearly determined by observation. List (3) was also the foundation for the constellation identifications (and the date and place of the observations) made by Hermann Hunger and David Pingree in their Astral Sciences in Mesopotamia (1999).
(5) Diffusion of Babylonian Uranography
The British assyriologist David Brown wrote ("The Scientific Revolution of 700 BC." In: Learned Antiquity edited by Alasdair MacDonald et al. (2003)): "The early pioneers in this field [Assyriology] concerned themselves with decipherment, largely ignoring the context in which the famed mathematical-astronomical cuneiform texts were written. They found, with particular parameters and mathematical techniques, that the evidence for transmission to Greece and thence to India in the Hellenistic period was overwhelming, and they left it at that."
During the Babylonian period astronomical knowledge was transmitted unchanged, due to the superiority of Babylonian astronomy, to all neighbouring cultures. Sometime around the middle of the 1st-millennium BCE Mesopotamian astronomical knowledge (including the accurate prediction of particular astronomical phenomena) spread westward. It had already done so during phases of the Assyrian Period. During the late 2nd-millennium BCE the astronomical knowledge summarised in the Mul.Apin series had spread to the Middle East, Greece, Iran and India. It was the Mul.Apin series that formed the basis for inter-relatedness between astronomical systems in these regions outside Mesopotamia.
Mesopotamian astronomy and cosmology were certainly known and influential in ancient Israel, especially after the Babylonian Exile - where the deported Judean priestly intelligentsia came into contact with ancient Mesopotamian science.
Elements of Babylonian astronomy are contained in Jewish apocalyptic literature and calendrical texts and in Enochic and Qumranic tradition/astronomy. The Demotic astrological texts (Egypt) are evidence of a pre-Hellenistic transmission from Mesopotamia to Egypt during the Persian Empire. Exactly how the Greeks came to learn about Mesopotamian scientific tradition is still unknown. By the 2nd-century BCE Babylonian astronomy had significantly influenced Hellenistic scientific thought.
David Pingree proposed that Mesopotamian omen-literature was transmitted to India during the Achaemenid occupation of northwestern India and the Indus Valley (late 1st-millennium BCE, circa 300 BCE). David Pingree also proposed that knowledge of the Mesopotamian sciences reached India by the late Vedic period (1000 BCE to 500 BCE).
(6) Accurate Astronomical Record Keeping
Astronomical records were only zealously compiled beginning with the reign of Nabonassar (Nebu-nasir) in 747 BCE. (This period also saw the beginning of more accurate astronomical observations.) It appears the so-called astronomical diaries (and other astronomical records) were diligently written starting with this period. (The Babylonians termed the observations for the Diaries "regular watchings." Documents similar to astronomical diaries may have been written as early as the 12th-century BCE in the reign of Merodach-baladan I. See: Assyrian and Babylonian Chronicles by Albert Grayson (2000, Page 13).) In his book Sternkunde und Sterndienst in Babel, II (Pages 366-371), the polymath Franz Kugler made the suggestion that a possible reason why the Babylonians may have been motivated to begin keeping more accurate astronomical observations and records beginning 747 BCE was the spectacular conjunction of the moon and the planets in what was also the first regnal year of Nabonassar (Nebu-nasir). The evidence supports the conclusion that detailed records of a range of topics - not just astronomical phenomena - were diligently kept from the reign of Nabonassar (Nabu-nasir/Nebu-nasir) 747-734 BCE. The Babylonian Chronicle Series begins its narration with the reign of Nabonassar (Nabu-nasir/Nebu-nasir). The "Astronomical Diaries" and the Babylonian Chronicle Series are typologically similar. (See the modern discussion: "The Scientific Revolution of 700 BC." by David Brown. In: Learned Antiquity edited by Alasdair MacDonald et. al. (2003, Pages 1-12).)
The greater survival of astronomical and other records from 747 BCE onwards is considered likely due to the increased political stability of Mesopotamia and more systematic approach to record keeping. The British assyriologist David Brown has proposed (2003) "that around 700 BC, ... prediction became an all-important skill to the astronomers who practised astrological divination in the service of the Assyrian kings." ("The Scientific Revolution of 700 BC." In Alasdair MacDonald, Michael Twomey, and Gerrit Reinink, (Editors). Learned Antiquity: Scholarship and Society in the Near-East, the Greco-Roman world, and the Early Medieval West. Pages 1–12.) The primary purpose of the astronomical phenomena systematically recorded in the (astronomical) Diaries appears to have been to enable prediction of certain astronomical events.
(1) The Decan System
The ancient Egyptians used special constellations (asterisms), the decans, to divide their year into 36 parts. The decans are an Egyptian system of 36 stars/star groups (asterisms). (The term decan is from the Greek meaning "10 days apart.") The decans could be groups of stars or single bright (conspicuous) stars. Decanal "star clocks" decorated Egyptian coffin lids starting circa 2100 BCE (and ending circa 1800 BCE). They show that there was a system of 36 named equatorial stars rising within 10 days of each other (and were based on the civil calendar year). These Egyptian "star clocks" are the earliest detailed astronomical texts known.
The decans rose at particular hours of the night during 36 successive periods of 10 days each, constituting the year. A decan indicated the one and same hour during 10 days. (Each specific decan rose above the eastern horizon at dawn for an annual period of 10 days.) As the stars rise about 4 minutes earlier night by night, a given decan was replaced after 10 days by the next decan in the sequence to mark a given hour. Otto Neugebauer believed the 36 decans formed the old year of 360 days. The 5 additional or epagomenal days were "ignored" but undoubtedly were taken into account during the development of the decan system. (The earliest Egyptian calendars indicate that the 5 epagomenal days were not regarded as belonging to the year. The New Year festival begins on the 1st Thoth, not on the 1st of the epagomenal days.) A more recent view by Anne-Sophie von Bomhard is that the original decan system was designed for a year of 365 days. The Egyptian "star clocks" (i.e., decans) are the earliest detailed astronomical texts known.
(2) Location of the Decans
According to the accepted interpretation made by Otto Neugebauer in Egyptian Astronomical Texts (Volume 1, 1960), based on the Book of Nut texts, the decan stars circled the sky in a zone approximately parallel to and slightly south of the ecliptic. The decans (a Greek term) lay within a wide equatorial belt and began with Sepedet (= Sirius). (Sepedet (= literally, "the excellent" but also "The Great Star") was sometimes called the "Mistress of the Year.") Sirius (Sepedet) is the only one of the decans able to be unambiguously identified. (Neugebauer's identification of the location of the decanal belt is disputed by Kurt Locher "New arguments for the celestial location of the decanal belt and for the origin of the s3h-hieroglyph." (Atti di sesto congresso internazionale di egittologia. (2 Volumes, 1992-1993)); and Joanne Conman "It's About Time: Ancient Egyptian Cosmology." (Studien zur Altägyptischen Kultur, Band 31, 2003).)
(3) Source Texts for the Decan System
The texts relating to the system of decan stars date from 2200 BCE to 1200 BCE. Decanal "star clocks" (also (mistakenly) termed "diagonal calendars") decorated the inside surface of Egyptian (wooden) coffin lids, in both drawings and texts, starting circa 2100 BCE (with the practice ending circa 1800 BCE). However, the decanal system can be identified as early as the Third Dynasty (circa 2800 BCE) and may even be earlier. (Our principal knowledge of astronomy in the Middle Kingdom period comes from wooden coffin lids, primarily from the 9th and 10th Dynasties. The painted scenes (sometimes carved) on the inside surface of the coffin lids are actually tables of "rising stars.") They are also shown on the tomb ceilings of Seti I (1318-1304 BCE) and on some of the ceilings/walls of royal tombs of the Ramesside period (12th-century BCE). They show that there was a system of 36 named "equatorial" stars rising within 10 days of each other (and were based on the civil calendar year). Pictures of decans comprise most of the celestial representations in Egyptian tombs.
(4) Purpose of the Decan System
The system of decan stars was used to indicate the hours of the night throughout the year. Lists of decans were prepared to determine the hour of the night if the calendar date was known, or to determine the decan if the hour of the night was known. The use of the decan stars for time measurement during the night likely led to the twelvefold division of the period of complete darkness. Of the 18 decans marking the period from sunset to sunrise, 3 were assigned to each interval of twilight. This left 12 decans to mark the hours of total darkness. The 12-unit division of the night therefore probably originated in the combining of the decanal stars with the civil calendar decades. The twenty-four-fold division of day and night (i.e., the 24-hour system) eventually derived from this. (The original 24-hour division was actually a system of "hours" of uneven length and uneven distribution between daylight and night. As early as circa 2100 BCE the Egyptian priests were using the system of 24 hours. According to one authority this comprised 10 daylight hours, 2 twilight hours, and 12 night hours. This system was obsolete by the time of Seti I. By the Ramesside period (circa 1300/1200 BCE) there was a simpler, more even division of 24 hours into 12 hours of night and 12 hours of daylight. It has been proposed, however, that the division of day and night into 12 hours each may have been initiated by the fact that the year was divided into 12 months.)
The "hours" successively marked by each decan star for an interval of 10 days were, however, actually only an "hour" of approximately 45 minutes duration. (Each decan would rise approximately 45 minutes later each night.) (The division of the hour into 60 minutes was the invention of the Babylonians.)
(5) Origin of the Decan System
The decanal system has been traced back as far as the 3rd Dynasty (circa 2800 BCE) and may be older still. The contents of coffin lids establish that the decanal system, of dividing the night into 12 hours according to the rising of stars or groups of stars, was in place at least by circa 2150 BCE. The contents of the Pyramid Texts show that the system of decans was established by at least the 24th century BCE.
(6) The Decan System and the Civil Calendar
The primary reason for the Egyptians to study the night sky seems to have been to establish the civil calendar (which was apparently initiated with the heliacal rising of Sothis (= Sirius)) on a firm basis. (The civil calendar was the official calendar. It was a simple calculating tool that could be followed automatically. The civil calendar remained unchanged in Egypt from its establishment circa early 3rd millennium BCE until near the end of the 1st millennium BCE.) The Egyptian calendar-year on which the system of decans (star clocks) was originally constructed was the civil or "wandering" year which consisted of 12 months of three 10-day weeks, divided into 3 seasons of 4 months each, followed by 5 epagomenal days (called "the days upon the year"/"those beyond the year"). The civil calendar had been long established when the decans first appeared on the inside surface of coffin lids of the Middle Kingdom period. Otto Neugebauer (The Exact Sciences in Antiquity, 1957, Page 82) wrote: "In tracing back the history of the Egyptian decans we discover the interaction of the two main components of Egyptian time reckoning: the rising of Sirius as the harbinger of the inundation, and the simple scheme of the civil year of 12 months of three decades each." To assist the establishment of a civil (year) calendar the sky was divided into a scheme of 36 decans, with each decan (characterised by a bright star or distinctive star group) marking one of 36 ten-day periods, to which were added 5 epagomenal days.
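Because the civil calendar was a purely arithmetical scheme, its structure is easy to state exactly. The following is a minimal sketch; the 1-based numbering of months, decades, and days is an assumption made for illustration, and the month and season names are omitted.

```python
# Egyptian civil ("wandering") year: 12 months x 3 ten-day weeks + 5 epagomenal days = 365.

def civil_date(day_of_year: int):
    """Map a day of the civil year (1..365) to (month, decade, day) or an epagomenal day."""
    if not 1 <= day_of_year <= 365:
        raise ValueError("the civil year has exactly 365 days")
    if day_of_year > 360:
        return ("epagomenal", day_of_year - 360)   # the 5 days "upon the year"
    d = day_of_year - 1
    month = d // 30 + 1            # 12 months of 30 days
    decade = (d % 30) // 10 + 1    # 3 ten-day weeks per month
    day = d % 10 + 1
    return (month, decade, day)

print(civil_date(1))     # (1, 1, 1)
print(civil_date(360))   # (12, 3, 10)
print(civil_date(363))   # ('epagomenal', 3)
```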
(7) Decan Lists
Many Egyptian monuments incorporate lists of decans.
The decanal system involved the arrangement of 10-day intervals throughout the year. The decan lists were essentially set out in tables consisting of 36 columns with (usually) 12 rows or divisions. The columns in the tables covered the year in 10-day intervals. The rows in the tables covered the 12 decanal hours of the night. In each of the 36 columns the decans are placed in the order in which they rise above the horizon (or transit the meridian). Every 10 days the 12 hours of the night are defined/marked by a different combination of 12 successive stars. With each of the successive 36 columns the name of a specific decan is moved one line higher than its place in the preceding column (i.e., the second decan becomes the first, and so on). This results in a diagonal structure (diagonal pattern) which is the reason for the early name "diagonal calendars" being given to these texts (but perhaps properly "star clocks" or "diagonal star clocks"). However, not all are arranged in a manner that would enable them to function as 'star clocks.' A complete diagonal calendar contains 36 transverse columns.
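The diagonal pattern described above can be reproduced mechanically. The sketch below assumes the idealised layout of 36 columns (ten-day decades) by 12 rows (night hours), with the decans simply numbered 1-36; real tables are less regular and use decan names rather than numbers.

```python
# Idealised "diagonal star clock": 36 ten-day columns x 12 hour rows.
# In column c, hour row h is marked by decan (c + h) counted around the
# cycle of 36 decans; each decan therefore climbs one row per decade,
# producing the diagonal pattern.

DECANS = 36
HOURS = 12

table = [[(col + hour) % DECANS + 1 for col in range(DECANS)] for hour in range(HOURS)]

# Print the first three columns, hour by hour:
for hour in range(HOURS):
    print([table[hour][col] for col in range(3)])
# Reading down the first column gives decans 1..12 (days 1-10); the second
# column (days 11-20) is shifted up by one decan, and so on: the diagonal.
```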
Basically 3 lists of decans were constructed. The comparison of all the variations in the decan lists enables a grouping into 5 families: three of rising decans, one of decans in transit, and one that cannot be assigned with certainty to either. The 5 families of decans are named from the first example of each. The 5 families are the Senmut, Seti I A, and Seti I C families of rising decans; the Seti I B family of transiting decans; and the Tanis family, which cannot be assigned with certainty to either type.
(8) The Two Decan Systems
There were 2 systems of decanal stars. The first (and original) system used heliacal risings. The second (and later) system used meridian transits. The second system replaced the first. There was also a third, later system, the Tanis system (whose application is uncertain). The decan system is uniquely Egyptian in origin.
(9) Tanis Family of Decans
The Tanis family of decans is found in examples from the 26th Dynasty down to the end of the 1st century CE. The Esna ceiling has the decans of the Seti I B family in one strip next to a strip with those of the Tanis family.
(10) Rising Decans
The decanal system consisted of 36 rising stars and used the heliacal risings of stars/asterisms on the eastern horizon as markers. Each period of 10 days was first marked by the heliacal rising of the next decan on the eastern horizon. They rose heliacally 10 days apart and all had the same invisible interval of 70 days prior to their heliacal rising. (At least ideally all the decans had the same duration of invisibility as their leader Sirius. All decans were invisible for 70 days between acronychal setting and heliacal rising, because during that interval they were lost in the light of the sun.)
By the time of the New Kingdom period (circa 1550-1100 BCE) the usefulness of the original decan system of hours had ceased. By the 10th Dynasty and 11th Dynasty the original decan system had become completely unusable, and in the 12th Dynasty it was subjected to a radical revision. Many old decans were dropped and many new decans were introduced.
(11) Transit Decans
From the Book of Nut texts we can identify the introduction of a new decanal system that can be termed transit decanal clocks. This new system, termed the Ramesside star clocks, used the transiting of the meridian by decans (their culminations) to mark the night-time hours. (The time of decan transits involved the time they crossed the meridian, i.e., reached the highest point in the sky (culmination).) This new method of indicating the night hours arose by combining only those stars which behave like Sirius with the 10-day weeks of the civil calendar. Like the previous system of decans, this attempt to substitute the culmination of stars for their heliacal rising also did not last.
The Ramesside (20th Dynasty) star clocks are star tables which measure hours by means of transits, in half-month intervals (i.e., a 15-day cycle/"week"). (One of the most important documents relating to Egyptian astronomy is the long table of (decan) star transits (culminations) for each hour of the night on every fortnight of the year. This is given most accurately in the tomb of Ramesses VI.) These star clocks are different from the earlier system of decans. Only a few of the stars/asterisms used in the earlier decanal star clocks are the same as, or near to, those used in the Ramesside star clocks. The evidence for these later star clocks comes exclusively from the ceilings of a number of Egyptian royal tombs of the Ramesside period (Ramesses VI, Ramesses VII, and Ramesses IX of the 12th-century BCE). Two sets of star tables appear in the tomb of Ramesses VI, one set of star tables appears in the tomb of Ramesses VII, and one set of star tables appears in the tomb of Ramesses IX. The texts consist of 24 star clock tables (panels) for the 24 half-month intervals of one year. These particular ceilings also include other astronomical information: (1) lists of decans and their divinities, (2) constellations, and (3) the days of the lunar month.
There was no provision for the 5 epagomenal days of the year. Also, the calendrical system based on the decans was flawed by its failure to take into account the fact that the Egyptian civil year was always approximately 6 hours short of the solar year. The lack of a leap year in the Egyptian civil calendar resulted in the risings of decans becoming out of phase with it. The result was that a slow progressive change took place in the relation between the heliacal rising of a decan and its date in the civil calendar. Rearrangements of the decanal order were attempted in order to counter the resulting mismatch.
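The mismatch is straightforward to quantify. Below is a minimal illustrative sketch, assuming the commonly quoted approximation that the 365-day civil year falls about a quarter of a day short of the solar year; the figures are rounded and purely illustrative.

```python
# Drift of the 365-day civil calendar against the stars (illustrative arithmetic).

shortfall_days_per_year = 0.25        # civil year ~6 hours short of the solar year

# Years for a decan's heliacal rising to slip one full day in the civil calendar:
years_per_day = 1 / shortfall_days_per_year          # 4 years

# Years for it to slip a whole decade (10 days), i.e. one decan position:
years_per_decan = 10 * years_per_day                 # ~40 years

# Years for the risings to cycle all the way round the civil calendar:
years_full_cycle = 365 * years_per_day               # ~1460 years

print(years_per_day, years_per_decan, years_full_cycle)   # 4.0 40.0 1460.0
```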
(12) The Nature of the Decans: Star Clocks, or Star Calendars, or Diagonal Star Tables?
The interpretation of the rows of diagonal star tables is controversial. The operation of the diagonal clocks is not securely established, and neither is the exact length of the decanal hours. The "star clock" outline explanation given above follows the work of Otto Neugebauer and Richard Parker. In 2007 the Egyptologist Sarah Symons ("A Star's Year: The Annual Cycle in the Ancient Egyptian Sky." In: Steele, John. (Editor). Calendar and Years: Astronomy and Time in the Ancient Near East (Pages 1-33).) proposed the more neutral term "diagonal star table." A common past term for diagonal star tables has been "star calendars." The common current term is "diagonal star clocks." However, Sarah Symons points out that just because the rows of diagonal star tables are related to the hours of the night does not necessarily mean that the tables are clocks. As early as 1936 the naturalised American astronomer Alexander Pogo ("Three unpublished calendars from Asyut." (Osiris, Volume I, Pages 500-509).) questioned whether the intended function of the diagonal star tables was as hourly timekeeping devices. In 1998 the Egyptologist Leo Depuydt ("Ancient Egyptian star clocks and their theory." (Bibliotheca Orientalis, Volume LV, Number 1/2, January-April, Pages 5-43).) likewise questioned whether the intended function of the diagonal star tables was as hourly timekeeping devices. See also: Depuydt, Leo. (2010). "Ancient Egyptian star tables: A reinterpretation of their fundamental structure." In: Imhausen, Annette. and Pommerening, Tanja. (Editors). Writings of Early Scholars in the Ancient Near East, Egypt, Rome, and Greece. (Pages 241-276).
(1) Chalcolithic/Early Bronze Age
In her 2002 doctoral thesis "The moon and stars of the southern Levant at Gezer and Megiddo: Cultural astronomy in Chalcolithic/Early and Middle Bronze Ages." Sara Gardner identifies constellations, including a lion constellation (equated with Leo), existing during the Chalcolithic/Early Bronze Age (circa 4500-2200 BCE) in the Levant. (The drawings of animals in Cave 30:IV at Gezer are held to represent constellations.) In their article "The Geometry and Astronomy of Rujm el-Hiri, a Megalithic Site in the Southern Levant." (Journal of Field Archaeology, Volume 25, 1998, Pages 475-496) Anthony Aveni and Yonathan Mizrachi set out the astronomical sophistication of the construction phase of the Rujm el-Hiri complex. The numerous star depictions on wall frescos at Telēlāt Ghassūl (in modern-day Jordan) suggest that Palestine had its own independent astral beliefs from very early times (circa 4th-millennium BCE).
Canaanite astral beliefs precede the Assyrian period of domination beginning in the late 8th-century BCE. Prior to the time of Assyrian domination the Palestinian mother-goddess in her own right had astral attributes, and was sometimes regarded as an astral goddess. She was represented as an astral goddess at Ugarit, Megiddo, Gezer, Bethshan, and Tell es-Safi. On a bronze plaque from Ras Shamra (Ugarit) the mother goddess is portrayed standing on the back of a lion which has a star imprinted on its shoulder. It is thought that this star was likely Regulus.
(2) Uranography of the Hebrews
The uranography of the Hebrews is fraught with difficulty and remains controversial. Hebrew star and constellation names most often bear little or no resemblance to star names in Mesopotamian uranography. In the Old Testament stars were believed to be animate bodies with names who ruled over the night.
In all likelihood the zodiac was known to the Hebrews (Israelites) in biblical times.
The amount of astronomical knowledge contained in the Hebrew Bible appears to be quite limited. There are no strictly astronomical texts in the Hebrew Bible (the Tanakh or "Old Testament") or other Hebrew literature such as the Talmud. There are, however, some calendar and constellation references. The Hebrew Bible does mention several individual stars and constellations. However, the astronomical references in the Hebrew Bible do not distinguish planets from stars. Our knowledge of Hebrew (Israelite) astronomy depends almost entirely on 3 Old Testament passages that refer to specific stars and constellations: Amos 5:8, Job 9:9, and Job 38:31-32. The number of constellations mentioned in the Hebrew Bible is small. (Only the word mazzārot (mazzāroth) in Job 38:32 can be understood as "constellations.") Also, they mostly appear in poetical references, and their identification remains uncertain. Though the extent of astronomical terminology and observation attested to in the Hebrew Bible is not very great, scholars are generally agreed that there is enough evidence to conclude that it represents only a small amount of the astronomical interest and lore that the ancient Hebrews (Israelites) possessed. It is generally agreed the Hebrew Bible shows knowledge of the constellations of the Greater Bear, Orion, and the Pleiades (an asterism in the constellation Taurus). The clearest references are: (1) Kesil (mentioned in Job 9:9, Job 38:31, Amos 5:8, and Isaiah 8:10) which is usually identified as the constellation Orion; (2) Kimah (Job 9:9, Job 38:31, and Amos 5:8) which is variously identified as the Pleiades, or the stars Aldebaran, Arcturus, or Sirius; (3) Ash (Ayish or Ayis) (Job 9:9 and Job 38:32) which is commonly identified as the Hyades, or the constellation Ursa Major, or the planet Venus as the Evening Star (Venus when seen after sunset); and (4) Mezarim which is commonly identified as denoting both the constellations Ursa Major and Ursa Minor; or another name for Ursa Major; or as a synonym for "mazzalot" and referring to the planets collectively or the constellations of the zodiac. (Godfrey Driver disputes the identification of Ash (Ayish) with Ursa Major (which goes back approximately 1000 years to Rabbi Saadia Gaon (882/892-942 CE) and Rabbi Abraham ibn Ezra (1092-1167 CE)) and follows Giovanni Schiaparelli in proposing identification with Aldebaran.) It is thought that Canis Major and its bright star, Sirius, were regarded by the Israelites as animals of some kind, perhaps dogs. John McKay, in his 1973 book Religion in Judah under the Assyrians (Chapter VI, "Astral Beliefs in Judah and the Ancient World"), gives the following identifications: Aldebaran and the Hyades ('āš / 'ayiš, Job 9.9) 'the Moth;' Ursa Major (mezārim, Job 37.9) 'the Winnowing-fan;' the Pleiades (kimā, Job 9.9, 38.31, Amos 5.8) 'the Cluster;' Orion (kesil, Job 9.9, 38.31, Amos 5.8) 'the Stout One' or 'the Clumsy Fool;' Taurus (šōr, Amos 5.9) 'the Bull;' Capella ('ēz, Amos 5.9) 'the Goat;' Virgo (mebassēr, Amos 5.9) 'the Vintager.' In Amos 5:26 Chuin is usually identified with the planet Saturn; in Jeremiah 7:18 (and elsewhere) Meleket ha-Shamayim is identified with the planet Venus; and in Isaiah 14:12 Helel is sometimes identified with the planet Venus as the Morning Star (Venus when seen before dawn). The extent of Hebrew Bible familiarity with the developed mythology associated with the 12 signs of the zodiac is still subject to debate.
Mazzaroth, which is mentioned only once (Job 38:32), has been interpreted as signifying a constellation, the zodiacal signs, or the planet Venus (as both Morning Star and Evening Star). It is still uncertain whether the constellations of the zodiac are intended with the use of the term Mazzaroth. Whilst it is commonly stated to signify the 12 signs of the zodiac this identification remains controversial. Also, the identification of Nachash with the constellation Draco is very uncertain and controversial. Both Édouard Dhorme and Giovanni Schiaparelli identify the "chambers of the south" (hadrê tēmān) with the stars of Argo, Centaurus, and the Southern Cross.
(1) Pre-Islamic Bedouins of the Arabian Peninsula
Astral gods/goddesses were widely established/venerated in ancient Southern Arabia where an astral cult known as Sabaeanism prevailed. The religion of South Arabia was essentially a planetary astral system in which the cult of the moon-god prevailed.
Before their contact with Greek-based astronomy through Arab-Islamic civilisation the pre-Islamic Arabs of the Arabian Peninsula had their own folk astronomy. They knew the fixed stars and asterisms and used a number of fixed stars and asterisms, the so-called anwā', for a variety of purposes. After the introduction of Islam in the 7th-century CE a substantial amount of poetry, proverbs, legends, and folk science was written down in Arabic texts. Some attention was focused on the star lore of the pre-Islamic and early Islamic Bedouins and farmers of the Arabian Peninsula. Specifically, from the 9th-century onwards Arabic-Islamic lexicographers and philologists collected old Arabic folk astronomy in books called Kutub al-anwā' (Books of the anwā') (Note: Most modern scholars simply write anwa'). From these books more than 300 old Arabic names for stars and asterisms have been recovered.
It is usually stated that eventually the folk tradition of Arabic star names was preserved as the "lunar mansions." This would appear to be erroneous. The system of "lunar mansions" is a type of almanac for seasonal activities. Daniel Varisco ("Islamic Folk Astronomy." In: Selin, Helaine. (Editor). (2000). Astronomy Across Cultures. (Pages 615-650).) states: "The claim that the formal model of twenty-eight lunar mansions originated as a set sequence of asterisms from a pre-Islamic star calendar cannot be sustained." The Arab-Islamic concept of "lunar mansions" appears to have been borrowed from India. (Knowledge of the Indian lunar zodiac may have existed in the Arabian Peninsula in the late 4th- or early 5th-century, prior to the birth of Muhammad.)
Within the post-Islamic (Arab-Islamic) tradition al-Sufi's book on the constellations Kitab suwar al-kawakib (Book of the Constellations of the Fixed Stars) written in the 10th-century CE was of fundamental importance. It was the first critical revision of Ptolemy's catalogue of fixed stars (included in his Almagest). However, al-Sufi adopted Ptolemy's basic scheme and pattern of constellations. He did not add or subtract stars from Ptolemy's star list and neither did he re-measure their (frequently incorrect) positions. However, in order to account for precession over the time between Ptolemy and his own day, al-Sufi updated the positions of the listed stars by adding 12° 42' to all of the longitudes. In his book al-Sufi included 2 drawings of each constellation figure: one as seen in the sky from earth and one reversed as seen when looking at a solid globe. He then included a paragraph of notes for each constellation. In these notes he discussed: (1) the problem of identification and errors in Ptolemy's coordinates, (2) variants of the names for individual stars, including old Arabic star names that predate Arab contact with Greek astronomy. He also numbered the stars on the charts and keyed them to the star list accompanying each.
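The nature of al-Sufi's correction, a single constant added to every ecliptic longitude while latitudes stay fixed, is easy to illustrate. Below is a minimal sketch; the sample star and its Ptolemaic longitude are made-up values, not taken from either catalogue.

```python
# al-Sufi's precession correction: add a constant to every Ptolemaic ecliptic
# longitude; ecliptic latitudes are unaffected by precession.

CORRECTION_DEG = 12 + 42 / 60        # 12 degrees 42 minutes

def update_longitude(ptolemaic_longitude_deg: float) -> float:
    """Return the longitude updated to al-Sufi's epoch (kept within 0-360 degrees)."""
    return (ptolemaic_longitude_deg + CORRECTION_DEG) % 360

# Hypothetical example: a star listed by Ptolemy at longitude 123 deg 10'.
print(round(update_longitude(123 + 10 / 60), 2))   # 135.87
```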
(2) Post-Islamic Arab-Islamic Classical World
To avoid misunderstandings the term Arab-Islamic needs to be defined. Arabic is a linguistic term identifying Arabic language users and the use of Islamic has the sense of civilisation rather than religion. (The term Arab-Islamic = linguistic-cultural; not ethnic-religious.)
"The star names used in the classical Islamic world were derived from two distinct sources: (1) the various (non-standardised) names originated by pre-Islamic groups of Bedouins (the nomadic desert Arabs of the Arabic Peninsula) (older body), and the main body (younger group) of indigenous Arabic star/asterism names were probably formed in the period 500-700 CE (prior to the introduction of Islam in the 7th-century CE); and (2) those transmitted from the Greek world. As Greek astronomy and astrology were accepted and elaborated, primarily through the Arabic translation of Ptolemy's Almagest, the indigenous Bedouin star groupings were overlaid with the Ptolemaic constellations that we recognize today." (Islamicate Celestial Globes by Emilie Savage-Smith (1985) Page 114.) "A third set of names derived from the Arabic were bestowals, often ill-based, by early modern Western astronomers even though they had never been used by Arabian astronomers. Most of these names have disappeared. Thuban, alpha Draconis, is an exception." (Early Astronomy by William O'Neill (1986) Page 162.) Both Emilie Savage-Smith and William O'Neill are reliant on the fundamental studies of Paul Kunitzsch. An example of the first category of star names of Arabic origin is Aldebaran from Al-Dabaran. An example of the second category of star names of Arabic origin is Fomalhaut from Fam al-Hut. An example of the third category of star names derived from Arabic is Thuban, alpha Draconis.
(3) The Demise of Arab-Islamic Uranography
During the course of the 19th-century European ideas on celestial mapping made a profound impact upon the traditional Arab-Islamic practices. By the end of the 19th-century little trace of medieval Arab-Islamic celestial mapping practices remained.
The key sources for constellations and star names are the Avestan and Pahlavi texts. The Avestan texts are earlier than the Pahlavi texts. The Avesta was committed to writing perhaps circa 3rd-century BCE. (The present text of the Avesta was compiled circa 3rd- to 7th-century CE from texts that survived destruction during the conquest of Persia by the Macedonian general Alexander the Great.) The Bundahishn was compiled circa 9th-century CE from earlier texts.
In the earliest material incorporated into the Avesta there are a few references that indicate the existence of some sort of observational astronomy. There are individual yashts dedicated to the sun, moon, Sirius, and Mithra. See yasht 6 to the sun, yasht 7 to the moon, yasht 8 to Tishtya (= Sirius), and yasht 10 to Mithra. The oldest extant Old Iranian source that makes reference to constellations is the Younger Avesta. It contains the names of two constellations only - the modern-day constellation Ursa Major (Great Bear) and the modern-day asterism Pleiades. From the names 'the seven marks/having seven marks' for Ursa Major and 'first' for the Pleiades they are clearly indigenous Iranian constellations. (Antonio Panaino states that the only constellations clearly attested in the Avestan texts are Haptoiringa with 'Ursa Major,' Titryaeini with 'Canis Minor,' and Paoiryaeini with the 'Pleiades.') The date of the first identification of Iranian constellation names is uncertain but it is thought that they can be placed in a prehistoric period of the eastern Iranian world. In the later Avestan literature, however, both constellations and star names are mentioned. These include the star Sirius, the constellation Ursa Major, the Pleiades (yasht 8:12), and the Milky Way. (There is an Avestan yasht addressed to the Milky Way (which is personified as feminine).)
Four so-called royal stars are mentioned in Siroza (Hymn) 1 and Siroza (Hymn) 2 forming part of the Khorda Avesta. These are Tishtya, Vanant (or Wanand), Satavaesa (or Sadwes), and the Haptoiringas (or Haftoreng). Only 2 of the 4 can be reasonably identified (i.e., Tishtya with the star Sirius, and Haftoreng with the stars of Ursa Major). However, many popular publications still proceed to identify Aldebaran, Antares, Fomalhaut, and Regulus as the four royal stars of Persia. This error is obviously based on the 1899 book Star Names by the amateur American star-lorist Richard Allen. (The identification Aldebaran, Antares, Fomalhaut, and Regulus was first proposed by the 18th-century French astronomer and historian Jean Bailly.)
The various identifications made of the so-called four royal stars are: Tishtya has been variously identified as Aldebaran, Sirius, Arcturus, and the Summer Solstice. Vanant (or Wanand) has been variously identified as Regulus, Vega, Altair (earlier Corvus), Sirius, and Procyon. Satavaesa (or Sadwes) has been variously identified as Antares, Aldebaran, the stars of Musca Australis (the actual constellation being invented circa 1595), and Crux. The Haptoiringas (or Haftoreng) have been variously identified as Fomalhaut, and Ursa Major.
(2) Bow and Arrow Constellation
The association of the benevolent Indo-Iranian god Tishtrya with the star Sirius occurred during the Achaemenid Period. It is very probable that the Avestan Titar (Titrya) (Sirius) corresponds to the Vedic Tisya (Tishya). Antonio Panaino identifies Titrya as an important Old Iranian astral divine being that is to be identified with Sirius (the brightest star in the sky). The 8th hymn (Tiar Yat) of the Later Avestan corpus was dedicated to Titrya. Bernhard Forssman has proposed an etymological explanation showing it is most likely that the Vedic Tisya corresponds to the Avestan Titrya, and that Sirius has a direct and clear relationship with the three stars of Orion's Belt. In several mythological passages in Vedic literature the three stars comprising the asterism of Orion's Belt were represented as an arrow shot by Tisya. In the Avestan Yast 8.6-7 and 37-38 Titrya flies in the sky as the arrow shot by the Aryan hero archer.
The Chinese have a Bow and Arrow constellation formed by the same stars as the Mesopotamian Bow and Arrow constellations. The celestial Emperor (i.e., mythical ancient Emperors) shot an arrow at the sky jackal (Sirius). In later Egypt, on the round zodiac of Denderah the Egyptian divine archeress, Satit (one of two wives of Khnumu), shoots her arrow at Sirius. The Mesopotamians had Bow and Arrow constellations (mul BAN and mul KAK.SI.DI). Sirius is KAK.SI.DI the Arrow Star (specifically the tip of the arrow). The Bow is formed from the stars of Argo and Canis Major. The MUL.APIN text states "the Bow Star is the Ishtar of Elam, daughter of Enlil." According to the polymath Franz Kugler the old Babylonian name for Sirius was "weapon of the bow" (= "arrow"). The Mesopotamian Bow and Arrow constellations are identifiable as the original source for the Iranian, Indian, Chinese, and Egyptian Bow and Arrow schemes.
(3) Lunar Mansions
The Indian system of lunar mansions was introduced into Iran (Persia) circa 500 CE. When the system of the lunar mansions (naksatras) was introduced into Iran (Persia) from India a completely new set of names was created for them. We have lists of the Iranian lunar mansions from 4 different sources. The Pahlavi Bundahishn contains a detailed discussion of the naksatras. (The number of lunar mansions listed in the Pahlavi Bundahishn is 27.)
The ancient Greeks are the main source of present-day Western constellation names. The ancient Greek constellations are a combination of mythology and science. The Greeks had only a few named constellations established by the time of Homer circa 800 BCE. There was no early intention by the Greeks to constellate the entire sky. Circa 800 BCE they only named the most prominent stars and established the most obvious constellations. Some 400 years later they adopted and modified Mesopotamian uranography, but applied their own constellation myths to the result. Hence the classical Greek constellation set represents a mixed Babylonian and Greek tradition. Greek astronomy began with the organisation of the more prominent stars into constellations. Bernard Goldstein and Alan Bowen have proposed that the original motivation for Greek astronomy was the construction of star calendars (parapegmata, which correlated dates and weather phenomena with the risings and settings of the stars).
(2) Star Names in Homer and Hesiod
The star names Sirius ("Scorcher") and Arcturus ("Bear Watcher") are mentioned by Homer and Hesiod in the 8th-century BCE. Homer and Hesiod were two of the earliest Greek poets. Hesiod, a poet and farmer in Boeotia, a region of central Greece, likely lived about the same time or shortly after Homer. The earliest constellation/astral myth of the Greeks appears in Homer's Iliad (and was likely ancient at this time). It is the myth of Orion becoming a constellation after his affair with Eos (Dawn). The astral myth of Orion was first told in full in Hesiod's (now lost) Astronomy. (Fragments of Hesiod's Astronomy were summarised in the Catasterismi by the pseudo-Eratosthenes.) Homer's attention in the Iliad and the Odyssey is directed mainly to constellations (Great Bear, Boötes, Orion, and the Pleiades). Hesiod's attention in the Works and Days is directed mainly to individual stars (Sirius and Arcturus).
The Greeks never thought of constellating the entire visible sky until circa the 5th-century BCE when Greek astronomy proper began. Around this time the Greeks adopted (and adapted) the Babylonian zodiac and other Babylonian constellations. By circa 400 BCE (likely under the influence of Babylonian uranography) the Greeks had, by borrowing and invention, established the majority of the 48 classical constellations.
(3) The Star Myths of Eratosthenes and Hyginus
The first Greek works which dealt with the constellations were books dealing with star/constellation myths. The main sources for Greek star myths were the now lost works of Hesiod and Pherecydes of Syros (philosopher, flourished 6th-century BCE). The most complete extant Greek works dealing with the mythical origins of the Greek constellations are the later works Catasterismi by the (conventionally called) pseudo-Eratosthenes (a Hellenistic writer) and De Astronomica (the Poeticon Astronomicon) by the (conventionally called) pseudo-Hyginus (an early Roman writer). The Phainomena of Aratus was also a source of star myths. Each of these authors drew extensively from the writings of older sources such as Homer and Hesiod, and their successors. They provide a clear overview of the stories that lay behind the present-day Western constellations we use.
Circa the 5th-century BCE many of the constellations recognised by the Greeks had become associated with myths. Both the star catalogue (constellation description) of Eudoxus (4th-century BCE) and the star catalogue (constellation description) of Aratus (3rd-century BCE) adopted the vocabulary of myth. In his Catasterismi Eratosthenes (284-204 BCE) completed and standardised this process, with each of the constellations being given a mythological significance.
"The constellations, as they were described in Greek mythology, were mostly god-favoured (or cursed) heroes and beasts who received a place in the heavens in memorial of their deeds. They were regarded, as semi-divine spirits, living, conscious entities who strode across the heavens. (Theoi Greek Mythology: www.theoi.com/Cat_Astraioi.html)"
In Greece, connecting constellations to stories aided the memorisation of the numerous star groupings that were developed. The content of the astral myths was independent of astronomical function.
(4) The Phaenomena of Eudoxus
Actual descriptions of constellations in Greece existed as early as Eudoxus (circa early 4th-century BCE). The Greek astronomer Eudoxus, circa 375 BCE, appears to have been the first person to develop a standardised map of the Greek constellations. A complete set of Greek constellations appears to have been first described by Eudoxus in two works called the Enoptron and the Phaenomena. Phaenomena was likely a revision and expansion of Enoptron. (Eudoxus appears to have been the first person to have comprehensively arranged and described (i.e., consolidated) the Greek constellation set.) The early method of the Greek astronomer Eudoxus for determining the places of the stars was to divide the stars into named constellations and define the constellations partly by their juxtaposition, partly by their relation to the zodiac, and also by their relation to the tropical and arctic circles. The complete (and standardised) constellating of the Greek sky (with 48 constellations) was possibly first achieved by Eudoxus in his work Phaenomena.
(5) The Phaenomena of Aratus
The first complete description of the Greek constellations to survive is given by the Greek poet Aratus circa 270 BCE. With only a few exceptions no actual stars are described by Aratus - only constellation figures. This method was undoubtedly inherited from Eudoxus who produced a set of descriptions of constellations in which the relative positions of stars in each of the constellations was described. Eudoxus was likely the first Greek to summarise the Greek system of constellations. The purpose of the Phaenomena by Aratus was to describe the appearance and the organisation of the constellations in the sky with reference to each other.
In the Phainomena of Aratus (circa 275 BCE) 44 constellations are named. Within the poem the constellations are descriptively arranged into two main areas, the northern constellations (including all of the zodiacal constellations), and the southern constellations. (The goal of the Phainomena was to entertain and educate the literate upper class of Greek society. The contents, especially the brief sections on seasonal signs and weather signs, are too sophisticated for ordinary farmers and sailors.) The star names mentioned by Aratus are Sirius, Arcturus, Procyon ("Forerunner of the Dog"), Stachys ("Ear of Corn," now Spica), and Protrugater ("Herald of the Vintage"). The poem of Aratus was a product of the Hellenistic Greek culture centred not at Alexandria, where scientific activity flourished, but at Athens and the Macedonian court there. The Phainomena describes the constellation figures of the night sky that embodied the cultural history and traditions of the world of Aratus.
The earliest commentary on Aratus' Phainomena was by Achilles Tatius, a Roman-era Greek writer who flourished in the 2nd-century CE and resided in Alexandria. The most influential Latin translation of Aratus' Phainomena was made by Claudius Germanicus (the Emperor Tiberius' nephew) in 19 CE.
The archaic Greek zodiac of the Aratean-Eratosthenic period comprised 11 figures positioned along the ecliptic. The 12-constellation zodiac of the Greek-Roman world originated in the 1st-century CE with the introduction of Libra (in place of the Claws of the Scorpion). The different versions survive in a number of different celestial maps (likely produced to support the comprehension of the first part of the Phaenomena) depicting either the Greek Aratean tradition or the later Latin Aratean tradition.
(6) Constellation Illustrations
Whether the Phaenomena of Aratus was actually illustrated with pictures of the constellation figures is uncertain. It is considered there is greater likelihood that there were already pictures in the Katasterismoi of Eratosthenes of Cyrene (circa 275-195 BCE) which were then later taken over into the commentaries and translations of Latin writers such as Germanicus, Cicero, Hyginus, and others. The American art historian Kurt Weitzmann held the view that the so-called Aratea (after Aratus) were in all likelihood illustrated with mythological figures for the constellations for the first time by Eratosthenes of Cyrene in the late 3rd-century BCE. There are very few sets of early illustrations of the mythological figures associated with the Aratean constellation set. The presently known illustrations (sets) include: the Farnese globe (Roman), the Kugel globe (Roman), the Mainz globe (Roman), the Qusayr 'Amra lodge and bath house (Arab-Islamic), and Codex Vossianus (saec. IX) (Carolingian). The constellation figures are shown from the rear. (This was also the case for most Carolingian illustrated copies of the Aratea.)
(7) Greek Stellar Nomenclature
In Greek astronomy the stars within the constellation figures were usually not given individual names. (There are only a few individual star names from Greece. The most prominent stars in the sky were usually nameless in Greek civilization. If there was a system of Greek star names then it has not come down to us and would also appear to have been unknown to Ptolemy.) The Greek name for constellations was katasterismoi. The 12 constellations/signs on the ecliptic were known as the zodiakos or zodiakos kyklos ("circle of little animals"). Greek constellations ("star catalogues") up to the time of Ptolemy are descriptive. The Western tradition of describing the constellations by means of describing the relative positions of the stars within the constellation figures was firmly established by Eratosthenes and Hipparchus. In their descriptions up to the time of Ptolemy the constellations were defined by the Greeks by their juxtaposition (i.e., descriptive comparison of positional relationship to each other). Prior to Hipparchus (and Ptolemy) the general goal of the Greeks at least was not accurate astronomical observation but artistic and mythological education. The end result was a sort of geographical description of territorial position and limits.
(8) The Stellar Observations of Timocharis and Aristyllus
Early in the 3rd-century BCE the Greek philosophers Timocharis and Aristyllus, using a cross-staff, accurately catalogued the positions (i.e., declinations) of some of the brightest stars. Timocharis, between circa 290-270 BCE, observed the declinations of twelve fixed stars. Aristyllus, continuing the program of Timocharis, observed between circa 280-240 BCE, the declinations of six more fixed stars. This is the first known Greek compilation of measured stellar positions forming a star catalogue. (See: "Ancient Stellar Observations Timocharis, Aristyllos, Hipparchus, Ptolemy - the Dates and Accuracies." by Yasukatsu Maeyama (Centaurus, Volume 27, 1984, Pages 280-310.)) It can be deemed the first true star catalogue.
(9) The Star Catalogue of Hipparchus
The first catalogue of stars over the entire visible sky probably originated with the Greek astronomer Hipparchus circa 130 BCE. One of the great achievements of Hipparchus was his (now lost) Catalogue of fixed stars. This star catalogue differed from the earlier and imprecise descriptions of the constellations. To compile his star catalogue Hipparchus apparently used an equatorial armillary sphere to measure the exact ecliptical coordinates (i.e., ecliptic latitude (angular distance from the ecliptic plane) and ecliptic longitude (angular distance from an arbitrary point i.e., the vernal equinox)) of approximately 850 stars. However, it is clear that at the time of Hipparchus a standardised system of spherical coordinates for denoting stellar positions did not exist. In the material that has survived Hipparchus does not use a single consistent coordinate system to denote stellar positions. He inconsistently uses several different coordinate systems, including an equatorial coordinate system (i.e., declinations) and an ecliptic coordinate system (i.e., latitudes and longitudes). He used declinations for about half of the 850 stars he catalogued. (In his Commentary, obviously written before his discovery of precession, the positions of stars, when given, are in a mixed ecliptic-declination system.) In his Commentary on the Phaenomena of Aratus and Eudoxus, Hipparchus largely chose to write at the same qualitative (i.e., descriptive) level as the two authors he critiqued. Only later, obviously after his discovery of precession, did he introduce a system of real ecliptic coordinates where the positions of stars are given in their latitude and longitude (and longitudes increase proportionally with time whilst latitudes remain unchanged). Hipparchus, in his Commentary, attributed Aratus' Phainomena to the earlier Phainomena of Eudoxus. He reached this conclusion after a detailed comparison of both texts. It has been suggested that Eudoxus' Phainomena is a revision of his earlier Enoptron.
The development of a system of coordinates to enable the positions of individual stars to be located accurately was first achieved in Greece; but only after undergoing considerable evolution. The fixed stars were first located only in very vague terms. In the Works and Days of Hesiod (circa 7th-century BCE) there exists only the most rudimentary system for identifying particular stars and where to find them in the sky. A first attempt at an exact coordinate system for locating particular stars was not to occur until some 500 years later with the star catalogue of Hipparchus. In the 2nd-century BCE Hipparchus originated a star catalogue in which he also tried to give reasonably accurate locational coordinates for the stars he listed. However, the coordinate system he used to locate the positions of the stars on his list remains unknown.
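The two kinds of coordinates mentioned here, equatorial (declinations, and later right ascensions) and ecliptic (latitudes and longitudes), are related by a rotation of the sphere about the equinoxes through the obliquity of the ecliptic. The sketch below gives the standard modern transformation; the obliquity value and the sample position are assumptions for illustration, and this is of course not Hipparchus' own procedure.

```python
import math

def ecliptic_to_equatorial(lon_deg: float, lat_deg: float, obliquity_deg: float = 23.44):
    """Convert ecliptic longitude/latitude to right ascension/declination (degrees)."""
    lam, beta, eps = map(math.radians, (lon_deg, lat_deg, obliquity_deg))
    # Declination from the standard spherical-trigonometry relation.
    dec = math.asin(math.sin(beta) * math.cos(eps) + math.cos(beta) * math.sin(eps) * math.sin(lam))
    # Right ascension, resolved into the correct quadrant with atan2.
    y = math.sin(lam) * math.cos(eps) - math.tan(beta) * math.sin(eps)
    x = math.cos(lam)
    ra = math.atan2(y, x) % (2 * math.pi)
    return math.degrees(ra), math.degrees(dec)

# Hypothetical star at ecliptic longitude 60 deg, latitude +5 deg:
print(ecliptic_to_equatorial(60.0, 5.0))
```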
Hipparchus introduced the system of magnitudes used to rank the brightness of the naked-eye stars in each constellation.
Approximately 300 years later Ptolemy compiled a star catalogue - likely by adding about 170 additional stars to the 850 in the star catalogue compiled by Hipparchus. Hipparchus' system of designating stellar magnitude was adopted by Claudius Ptolemy. Before Hipparchus and Ptolemy Greek astronomy focused on constellation figures rather than star positions.
(10) The Star Catalogue of Ptolemy
The final consolidation of the classical Greek star names and constellation figures was accomplished by the polymath Ptolemy circa 150 CE in his book The Great System of Astronomy. (Originally called the Syntaxis by Ptolemy and then called the Almagest by the later Arabic translators.) The earliest Western star catalogue (as we understand the term) originated with the astronomer Ptolemy (circa 140 CE). The culmination of Greek establishment of constellation (and star) names was contained in Books VII and VIII of Ptolemy's Almagest written circa 140 CE. In it Ptolemy listed 1025 (fixed) stars. For his star catalogue Ptolemy used one system of proper coordinates (ecliptic longitudes and latitudes) for all the stars listed in it. (Interestingly, the Roman historian Pliny the Elder (1st-century CE) mentioned the existence of another star catalogue of 1600 stars existing some 75 years prior to Ptolemy's star catalogue.)
Ptolemy did not identify the stars in his catalogue with Greek letters, as is done by modern astronomers. The star catalogue in Ptolemy's Almagest lists over 1000 stars with their coordinates and magnitudes. Only a dozen stars are given proper names. The remaining stars listed are identified with descriptors of their places within the constellations. Each of the 1025 stars listed was identified (1) descriptively by its position within one of the 48 constellation figures; then (2) by its ecliptic latitude and longitude; and then (3) its magnitude. It is this particular star catalogue method of Ptolemy that enables us to identify, with considerable exactness, the boundaries (i.e., shape) of the ancient Greek constellations. The constellation scheme described by Ptolemy (Almagest, circa 140 CE) consisted of 21 northern constellations, 12 zodiacal constellations, and 15 southern constellations.
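The threefold description of each entry (verbal place within a constellation figure, ecliptic coordinates, magnitude) maps naturally onto a small record structure. The sketch below is purely illustrative; the field names and the sample values are assumptions, not data quoted from the Almagest.

```python
from dataclasses import dataclass

@dataclass
class CatalogueStar:
    constellation: str     # one of the 48 Ptolemaic constellations
    description: str       # verbal place of the star within the constellation figure
    longitude_deg: float   # ecliptic longitude
    latitude_deg: float    # ecliptic latitude (positive = north of the ecliptic)
    magnitude: int         # 1 (brightest) .. 6 (faintest naked-eye class)

# Illustrative entry in the general style of the catalogue (values are made up):
example = CatalogueStar(
    constellation="Orion",
    description="the bright reddish star on the right shoulder",
    longitude_deg=54.0,
    latitude_deg=-17.0,
    magnitude=1,
)
print(example)
```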
(11) Early Star Catalogues
The term "early star catalogues" is also commonly applied to descriptions of Greek (and Babylonian) uranography prior to Ptolemy. With few exceptions these "early star catalogues", however, are distinctly different from what modern astronomers, from Ptolemy onwards, have meant by the term. With few exceptions, prior to Ptolemy star catalogues did not give the position of stars by any system of mathematical coordinates. They are instead qualitative descriptions of the constellations. They simply note the number of stars in each part of a constellation and the general location of the brighter stars within a constellation. (The type of description usually used is "near X is Y".) This cumbersome method of describing the location of stars in terms of their relative positions in a constellation was used by both the Babylonians and the Greeks. The pictorial arrangement of stars is not a star catalogue. A star catalogue proper gives accurate positions for each individual star regardless of the constellation it is grouped into. Also, the boundaries of the Greek constellations were subject to change up to the time of Ptolemy.
(1) Roman Debt to Greek Uranography
The Romans derived a considerable portion of their star lore and uranography from the Greeks. What stars/constellations the Romans had before they borrowed from the Greeks is uncertain. Both the Greek and Roman poets related fabulous stories about the origin of the constellations.
(2) Roman Uranography
The constellation Libra (the Scales) originally represented the claws of the constellation of the Scorpion. (The constellation Libra was included in the Babylonian zodiac but was later described by Hellenistic astronomers, such as Ptolemy, as "'the claws' of the great Scorpio.") However, in Roman times the star grouping was changed and the constellation Libra (the Scales) was established as a separate constellation. The claws of the constellation of the Scorpion were incorporated into the remaining stars representing the constellation of the Scorpion. According to a somewhat vague account by Virgil the Roman astronomers drew back the claws of the Scorpion constellation and the constellation Libra (the Scales) was added in honour of Julius Caesar, at whose death a new star was said to have appeared in that part of the sky.
(6) Far East
(1) The Rig Veda
The Rig Veda (basically an early collection of Hindu religious hymns) lists a number of stars. (There is no sophisticated astronomy within the Rig Veda. From internal evidence the date of the composition of the Rig Veda is indicated as being between 1500-1400 BCE.) The Rig Veda reference (i, 162:18) to 34 lights has been interpreted as referring to the sun, the moon, the 5 planets, and the 27 naksatras. (The Rig Veda gives no complete list of the naksatras.) The Rig Veda does mention 3 possible asterisms: Tisya [Tishya] (v, 54:13; x, 64:8), Aghas and Arjuni (x, 85:3). In some of the late hymns of the Rig Veda (dating approximately to the first half of the first millennium BCE, i.e., 1000 BCE to 500 BCE) the astronomical knowledge is related to the content of the late second-millennium Mesopotamian astronomical text known as Mul.Apin. This Mesopotamian text includes a catalogue of some 60 constellations in order of their heliacal risings, and 17 constellations in the path of the moon, beginning with Mul.Mul (the Pleiades). (Also like Mul.Apin it has an ideal year of 360 days of 12 x 30-day months. This ideal year also appears in a late hymn of the Rig Veda and also in the Atharva Veda.)
The Rig Veda has an (incomplete) list of 27 (or 28) stars/asterisms (naksatra), also associated with the path of the moon, and also beginning with the Pleiades (called Krttikas). Not all naksatra lie exactly in the path of the moon. (The word naksatra seems to refer to any star. Usually the naksatras were asterisms (small star groups/patterns which were assigned specific names). Technically the naksatras are the lunar mansions.) These constellations were in use (in late Vedic times) at the beginning of the 1st millennium BCE. (In later literature Indian astronomers inserted a 28th naksatra.) (Other Vedic texts similar to the Rig Veda mention constellations. Two passages in the Yajur Veda list 27 constellations. A third passage in the Yajur Veda and a passage in the Atharva Veda mention 28 constellations.) Some 20 naksatra have correspondences with the Mul.Apin list of asterisms in the path of the moon. The Indian lists of naksatras were established during the early first millennium BCE. They show striking resemblances to the Mesopotamian constellations; especially to List VI in the Mul.Apin series. Strictly, the calendars of the Vedic and Brahmanic Periods were luni-solar. The naksatras were used to mark the positions of the sun, moon, and planets.
The positions of the naksatras in the sky are not defined at all in any of the very early texts.
(2) Influence of Mesopotamian Uranography on India
David Pingree suggested (The Astral Science in Mesopotamia by Hermann Hunger and David Pingree (1999, Page 63)) that the Mesopotamian association of gods/goddesses with constellations in the late 2nd-millennium BCE probably influenced the Vedic Indians to also associate one or a set of their gods/goddesses to each of their lunar mansions (naksatras). According to David Pingree it is also likely that the influence for the development of the Indian system of naksatras originated in Mesopotamia, specifically with star list VI in the Mul.Apin series (17/18 stars/asterisms in the path of the moon). The original use of the scheme of naksatras was simply to record the location of the moon among the stars. The scheme of naksatras was later extended to an astronomical system for recording the positions of planets (and the lunar nodes). Examples of this later use are to be found in the Mahabharata. (The Vedic convention of 27 or 28 lunar mansions has survived in modern Indian calendrical practice.)
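Since the scheme's basic use was to record the moon's place among the mansions, the bookkeeping reduces to simple arithmetic on the moon's ecliptic longitude. The sketch below uses the equal 27-fold division familiar from later Indian calendrical practice; it is a simplification, since the Vedic naksatras were actual asterisms of unequal extent.

```python
# Equal-division naksatra bookkeeping: 360 deg / 27 = 13 deg 20' per lunar mansion.

NAKSATRAS = 27
SPAN_DEG = 360 / NAKSATRAS          # 13.333... degrees

def naksatra_index(moon_longitude_deg: float) -> int:
    """Return the lunar mansion (1..27) containing the given ecliptic longitude."""
    return int((moon_longitude_deg % 360) // SPAN_DEG) + 1

print(naksatra_index(0.0))     # 1  (the first mansion)
print(naksatra_index(50.0))    # 4
```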
It seems likely that the Mesopotamian idea of associating stars with cardinal directions is reflected in Indian texts such as the Vedic Satapatha Brahmana. The Satapatha Brahmana states the Saptarsis (Ursa Major, "Wagon" (Babylonian: MAR.GID.DA)) rise in the North, and the Krttikas (= the Pleiades, "Stars" (Babylonian: MUL.MUL)) rise in the East. The science historian and Sanskritist David Pingree believed that all of the astronomical information from the Mesopotamian Mul.Apin series reached India through Iran.
Ancient Persia (Iran) was one of the intermediate stages in the transfer of Babylonian astronomical ideas to India and China. The Babylonian scheme of 12 equal divisions of the ecliptic (and then ultimately the 12 zodiacal signs) most likely reached India through a Greek intermediary sometime in the late first millennium BCE or in the early first millennium CE. Elements of Mesopotamian astronomy were transmitted to India during the Achaemenid Period (circa 550 to 330 BCE); especially during the Achaemenid occupation of the Indus Valley in the 5th-century BCE. This was a significant stage in Mesopotamian astronomy reaching India. The period of the astronomy of the early Puranic writings and early Siddhantic writings (i.e., post-Vedic astronomy) - circa 500 BCE/400 BCE - 400 CE/500 CE saw the transmission of both Greek and Persian ideas on cosmology. (It is not uncommon, however, to see exaggerated arguments for the influence of Indian astronomy on the West and the emphatic minimisation or denial of the influence of Babylonian astronomy on India.)
(1) Sky Maps
The Chinese developed their own system of constellations and these are quite different from the traditional Western system of constellations. The Chinese did not follow the Western tradition of grouping stars according to their brightness but rather grouped stars according to their location. Also, the Chinese formed their constellations from only a small number of stars. (A few (five) Chinese constellations were patterned in the same way as those used in Western Europe. These were: (1) the Great Bear, (2) Orion, (3) Auriga, (4) Corona Australis, and (5) the Southern Cross.) In Chinese uranography a constellation was called a "palace," with the major star being the emperor star and lesser stars being princes.
The Chinese had been creating star maps and star catalogs since at least the 5th-century BCE. The first Chinese star charts appeared during the Warring States period (circa 475-221 BCE). (The Warring States period was just prior to the unification of China under the first emperor Qin Shi Huang (or Shih Huang Ti) in 221 BCE.) The scientific and technological achievements of the Warring States period are immensely impressive. The various feudal states all had their own court astrologers/astronomers. Chinese astrologers/astronomers began to group the individual stars into constellations with each constellation having a symbolic significance. Shi Shen of the State of Wei and Gan De (possibly) of the State of Qi (Chu) co-authored The Gan and Shi Book of the Stars. In it they accurately recorded the positions (i.e., provided equatorial coordinates) of 120 (121?) stars. It is the world's earliest star chart. (This star catalogue also included the names of constellations and other stars that had not had their positions accurately recorded.)
The fixed star registers of the 3 astronomical schools were preserved in the Kaiyuan Zhanjing (Treatise on Astrology) of the Kaiyuan Period (729 CE) from the Tang Dynasty (618-907 CE). (The earliest existing book to systematically describe the Chinese constellations was the Tianguan Shu (Monograph on Heavenly Officers) by Sima Qian (circa 145 BCE - 87 BCE). Some 90 constellations were mentioned including the 28 lunar mansions. Another feature was that the Chinese sky was divided into 5 palaces.)
Circa 310 CE (immediately after the Han period) Chen Zhuo (Chhen Cho) (circa 230-320 CE), the Imperial Astronomer of the Wu State, and later the Jin court, (he lived during the Three Kingdoms (= Sanguo) period, and at the beginning of the Jin dynasty) constructed a map of the visible sky (stars and constellations) based on the astronomical schools of Shi Shen, Gan De, and Wu Xian. He combined (integrated) the three traditional star maps of Shi Shen, Gan De, and Wu Xian to form a new star catalogue of the visible sky. With additions included there were 1,464 stars and 283 (284?) constellations, and also included were an explanation and astrological commentary. Undoubtedly, in the combined star catalogue of Chen Zhuo, the groups of constellations he attributed to each of the three astronomical schools were only his own chosen allocation. (It would be mistaken to believe that each of these groups of constellations was exclusively the constellations of each of the three astronomical schools used by Chen Zhuo. There is no reason to suppose that each of the three astronomical schools did not take a comprehensive interest in the entire visible sky.) From this time on the new version of the Chinese sky provided by the scheme of Chen Zhuo became established as the traditional Chinese sky. It was inherited by the Tang dynasty (618-907) astronomers and the Chinese sky became relatively fixed. No further significant changes occurred. Some stars were added, some star names were changed (the different star names introduced were actually synonyms), and the shapes of some constellations were changed into new groupings of stars. After the Tang dynasty the constellations were no longer distinguished according to which school they had belonged to. The later planisphere of Qian Luozhi agreed with this composite star chart constructed by Chen Zhuo.
It would appear that most of the constellations of Gan De and Wu Xian were just fill-ins amongst the constellations listed by Shi Shen. Shi Shen's constellations were formed from the brightest stars in the sky. It has been commented that the constellations of Gan De and Wu Xian did not seem to exist in their own time but were later developments of star naming during the Han Period. Before the Han Period there did not exist any complete description of the sky. It remained largely unconstellated. Only 38 star names or constellation names are mentioned in pre-Han literature. These 38 star names or constellation names were either the 28 hsiu (xiu) or were popular stars or constellations (appearing in folklore or poems) such as Niulang (= alpha Aquilae), Zhinu (= alpha Lyrae), and Beidou (= Ursa Major). (Later, the 7 bright stars of Ursa Major were known as Yu Ya (the Chariot) and the Milky Way was known as Tian He (Celestial River) or Yin He (Silver River).)
The well-known Dunhuang star chart is an example of the coloured star map of Qian Luozhi (Qian Lezhi). It gives a flat representation of Qian Luozhi's three-coloured traditional chart on the celestial globe (made 5th-century CE). Between 424 and 453 CE (during the Nan Dynasty) the Imperial Astronomer Qian Luozhi had a bronze celestial globe (planisphere) cast with the stars on it coloured in red, black, and white to distinguish the star listings of the three astronomers he had sourced. (The colours used had nothing to do with the observed colours of stars.) These were the first Chinese catalogues of star positions, drawn up by the astronomers Shi Shen (Shih Shen or Shi Shi), Gan De (Kan Te or Gan Shi), and Wu Xian (Wu Hsien or Wuxian Shi). (Shi Shen listed 93 constellations; Gan De listed 118 constellations; and Wu Xian listed 44 constellations.) They created their own star maps for calendrical and astrological purposes. The positions of a number of stars were accurately determined. The stars of Shi Shen were coloured red, the stars of Gan De were coloured black, and the stars of Wu Xian were coloured white. The use of colours was due to the belief that the three astronomers had each used different methods of astrological interpretation and that it was therefore necessary to know which system to apply. On the Dunhuang star chart the stars of Shi Shen were coloured yellow (not red), the stars of Gan De were coloured black, and the stars of Wu Xian were coloured white. (Wu Xian is actually a vague (probably legendary) figure from the Yin dynasty (said to be a Minister at the time of Emperor Da Wu) circa 1200 BCE. During the later Han period some astrologers began to write in the name of Wu Xian and this practice led to the emergence of a Wu Xian astronomical school.)
(2) Early Chinese Star Maps
Some early Chinese star maps are:
(1) Star map/catalogue attributed to Wu Xian (said to have been created circa 1200-1000 BCE, though the attribution is perhaps mythical for this period). This was a partial (northern) sky star map apparently containing 44 central and outer constellations and a total of 141 stars.
(2) Star map/catalogue by Gan De (created between circa 475-221 BCE, Warring States period). This was a partial (northern) sky star map possibly containing 75 central constellations and 42 outer constellations (= 117 constellations). (Some sources, though, state 510 stars in 118 constellations.)
(3) Star map/catalogue by Shi Shen (created circa 350 BCE). This was a relatively comprehensive (northern) sky star map apparently containing 138 constellations, 810 star names, and the locations of 121 stars. (According to some sources it contained the 28 lunar ecliptic constellations/asterisms, 62 central constellations, and 30 outer constellations.) It may well lay claim to have been the earliest star catalogue.
(4) The book Tianguan Shu (Monograph on Heavenly Officers) by Sima Qian (lived circa 145 BCE - 87 BCE) was the earliest book to describe the Chinese constellations. Some 90 constellations (500 stars) were mentioned, including the 28 lunar mansions.
(5) Star map/catalogue by Chen Zhuo (created circa 270 CE). This was a whole (northern) sky star map whose contents were a unified constellation system (integrating the records of Shi Shen, Gan De, and Wu Xian) containing 1464 stars in 284 constellations.
(6) Planetarium/star map by Qian Luozhi (Qian Lezhi) (created circa 443 CE, Nan Dynasty). This whole (northern) sky planetarium/star map used red, black, and white to differentiate stars from the different star maps of Wu Xian, Gan De, and Shi Shen.
(7) The Dunhuang star map/catalogue (created circa 705-710 CE). It is an example of the coloured star map of Qian Luozhi (Qian Lezhi).
(8) The Suchow (Soochow/Su-chou) planisphere/star map by Huang Shang (created 1193 CE). This was a whole (northern) sky chart depicting the sky visible from central China (approximately 35 degrees north latitude). The inscription accompanying the chart states there are 283 asterisms and 1565 stars. There are, however, 313 asterisms and only 1440 stars displayed on it.
(3) Sky Divisions
The 28 lunar lodges came to form the basis of the Chinese astronomical coordinate system (i.e., reference points). The hsiu (or xiu) constellations were used constantly throughout Chinese history as precise markers of the positions of celestial bodies during the seasons. Each hsiu (xiu) has a triangular patch of the sky extending up to the North Pole. (This is because the 28 lunar lodges sliced the celestial sphere into 28 sectors similar to the sections of an orange. All lines radiated from the "orange stem" of the north celestial pole. Each of the 28 sectors contained one of the lunar lodges and the width of a sector was dependent on the size of the constellation (lunar lodge).) As the lunar lodges were spaced out, more or less, along both sides of the celestial equator, this coordinate system is usually regarded as an equatorial system. Some modern researchers, however, hold that the lunar lodges mostly followed the ecliptic. (However, Chinese astronomy generally ignored both the horizon and the ecliptic.)
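As an illustration of how such an equatorial lodge system works in practice, the following is a minimal Python sketch. It uses only the first 4 lodge names, and the boundary right ascensions are hypothetical round numbers (in the real system the unequal widths were fixed by each lodge's determinative star); it simply shows that, because every lodge is a wedge bounded by hour circles through the celestial pole, assigning a star to a lodge depends only on its right ascension.

import bisect

# Illustrative subset of the 28 lodges; boundary values are made up.
lodge_names = ["Jiao", "Kang", "Di", "Fang"]
lodge_start_ra = [204.0, 216.0, 226.0, 241.0]   # hypothetical boundaries in degrees

def lodge_of(star_ra_deg):
    """Return the lodge whose wedge contains the given right ascension.
    A star belongs to the lodge whose starting boundary it most recently passed."""
    ra = star_ra_deg % 360.0
    i = bisect.bisect_right(lodge_start_ra, ra) - 1
    # A right ascension before the first boundary wraps around to the last lodge
    # (in this toy example the four lodges are treated as covering the full circle).
    return lodge_names[i % len(lodge_names)]

print(lodge_of(210.0))   # falls in the wedge opened at 204 degrees -> "Jiao"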
Each lunar lodge was numbered and named for a constellation or asterism. The 18th lunar lodge was called Mao and was formed by the stars of the Pleiades. The 21st lunar lodge was called Shen and was nearly identical to the modern European constellation Orion.
William O'Neill (Early Astronomy from Babylonia to Copernicus, 1986, Page 179) writes: "An interesting and unique feature of the hsiu was the designation of 28 circumpolar stars on approximately the same meridians as the hsiu stars. Thus even when a hsiu star was below the horizon its direction could be read from its paranatellon (a star crossing the meridian at the same time)."
The Shangshu (Book of Documents) contains a paragraph concerning 4 cardinal asterisms and is generally agreed to record the observation of stars before the 21st-century BCE. Also, a similar reference appears in the Records of the Grand Historian by Sima Qian (life dates: circa 145-90 BCE, Prefect of the Grand Scribes in the Han government, and astrologer) describing the Xia dynasty circa 2000 BCE.
A period of particular interest for the constellating of the entire Chinese sky is the Han Period (circa 200 BCE-200 CE). Prior to the Han Dynasty the constellation system of 28 lunar lodges (presumably developed in reference to the sidereal month), and little else, was established. The earliest description of the entire Chinese sky is given in the Tianguan Shu (Monograph on Heavenly Officers) by Sima Qian (circa 145 BCE - 87 BCE). In this book he mentions 91 constellations (including the 28 lunar lodges), comprising approximately 500 stars. It is the earliest existing book to systematically describe the Chinese constellations. Another feature was that the Chinese sky was divided into 5 palaces.
The earliest Chinese historical records known are the writings on the oracle bones. Some of the inscriptions on the oracle bones (mainly fragments of turtle/tortoise shells (carapaces) and mammalian bones (i.e., the scapulae of oxen)) discovered at Anyang, and which date to the Shang Period (circa 16th- (but likely 14th-) to 11th-century BCE), contain some star names. (The fragments of carapaces or mammalian bones were subjected to heat and the paths made by the resulting cracks were interpreted to answer questions about current or future events.) The star names plausibly indicate that a scheme for dividing the sky along the equatorial circle into 4 main divisions was being developed at the time. It is generally accepted that at least 4 quadrantel hsiu were already known in China in the 14th-century BCE. The discovery of the Shang oracle bones makes it possible to trace the gradual development of the system of Chinese lunar lodges from the earliest mention of the 4 quadrantel asterisms. (Unfortunately the astronomical data so far found on oracle bones and deciphered have greater historical than scientific interest because we do not know the exact time and position of the astronomical occurrences recorded.)
The Canon of Yao (comprising the first section of the Shu Ching (Classic of History), dated circa 4th-century BCE) states that the 4 stars named Niao, Huo, Hsü, and Mao mark the 4 tropic times. The 4 tropic times correspond with the middles of the 4 seasonal quarters of the year, not with their beginnings. Much later, during the Han Period (circa 200 BCE-200 CE), the 4 stars Niao, Huo, Hsü, and Mao were identified with 4 of the 28 lunar lodges.
The system of 28 lunar lodges of unequal sectors dates back to at least the second half of the 5th-century BCE. (The hsiu (xiu) are quite unequal in size. The reason for this is to make them 'key' accurately with circumpolar stars. Some of the hsiu had to be very wide because there were no circumpolar stars to which narrower divisions could be 'keyed.') The names of all the 28 lunar lodges are inscribed on a lacquer(ed) box cover found in the tomb of Marquis Yi of Zeng. Zeng was a minor state. This is the earliest extant list of all 28 hsiu. The tomb (located on a hillside in Hupei Province) is dated to 433 BCE. The lacquer(ed) box is now kept in the Hupei Provincial Museum. (The tomb was accidentally discovered in 1977 and excavated by Chinese archaeologists in 1978.)
From the Tang Dynasty onwards, the 3 Yuan (enclosures) and the 28 hsiu (xiu) formed the main structure by which the Chinese organised the stars.
(4) The So-Called Chinese Zodiac
The term "Chinese zodiac" is a misconception originating from the system of 12 Jupiter-stations. The 12 Jupiter stations do not equate to the 12 signs of the Western zodiac. One of the late Chinese systems of dividing the sky was the system of Jupiter Stations in which the equator was divided into 12 equal sectors reflecting the approximately 12-year orbital period of the planet Jupiter. (Joseph Needham pointed out the equator (and by analogy the ecliptic) was divided into 12 Jupiter-stations.) The 12 stations Jupiter passes through in one revolution around the sun were associated with 12 animals (taken from the 60-year cycle count). Chinese astrologers associated each of the 12 years with a sequence of 12 animals. Each animal represents 1 year of the 12-year Jupiter-cycle. The concept was well established in Chinese thought by the 4th-century BCE. These 12 Chinese animal signs do not correspond to the 12 signs of the Western zodiac. Also, the so-called Chinese "zodiac" is not linked to the constellations. The Chinese astrologers also related each of the 12 years of the Jupiter-cycle to one of the feudal states. The names of the 12 annual Jupiter-stations began to be used to count years in 365 BCE. The animal signs repeat themselves every 12 years. Because the sidereal period of Jupiter is actually 11.86 years Jupiter would gradually move closer towards the next Jupiter station after each year. After 84.7 years it would be found in the next station. (The apparent motion of Jupiter around the 12-stations is irregular, involving 11 retrograde movements during a 12-year cycle.) Jupiter stations were already out of step with the calendar in the 11th-century CE and had long ceased to be used in the Chinese calendar. When problems set in with the real Jupiter 12-year cycle an "fictitious" (= ideal real counter-orbital-Jupiter) moving backwards in a exact 12-year cycle was invented. This rather bizarre concept continued to be used for centuries. The 12 stations of this imaginary Jupiter were marked by 12 chen (= asterisms).
(5) Political Cosmology
The Chinese believed the sky to be the other half of the earth. They also believed the sky was a mirror of the earth. As such ancient Chinese astronomy was a political science. Each part of the sky was subdivided to correspond to the different regions of the earthly Chinese empire. The bureaucratic governing structure of China was also reflected in the sky. Chinese astronomers searched the sky for celestial changes as these were regarded as omens. The Chinese sky was intimately linked to the symbolism of the Middle Kingdom i.e., the "Central States" along the Yellow River valley.
(6) Diffusion of Indian Astronomy Into China
Indian astronomy was introduced into China with the journeys of Buddhist monks into China from the late 2nd-century to the early 11th-century CE. During this period of about 800 years a large number of Indian astronomical ideas were introduced into China. This included the Indian system of lunar mansions, the 27 naksatras. This did not result in any great impact on the existing Chinese system of 28 hsiu (xiu). Both the Koreans and the Japanese, in part due to the political dominance of China in the region, adopted Chinese uranography.
The currently dominant view is that the Korean language belongs to the Ural-Altaic group (which does not include Chinese). Archaeological evidence indicates that Altaic or proto-Altaic speaking tribes (proto-Koreans) migrated from central Asia (south-central Siberia) to the Korean peninsula in successive waves from the Neolithic Age (spanning circa 4000 BCE to 300 BCE) to the Bronze Age (spanning circa 1000 BCE to 300 BCE). They replaced the Paleosiberians who were the earlier settlers of the region. The Paleosiberians were either assimilated or driven further north. The proto-Koreans formed several tribal states that were later established as kingdoms. Koguryu [Koguryo] was Korea's first feudal state.
(2) Influence of Chinese Uranography on Korea
Korea's system of astronomy and uranography was almost completely based on China's system of astronomy and uranography. Indigenous Korean astronomical knowledge is identified in the (Korean) Bronze Age and Koguryu period (and also the Goryeo period). The Koreans largely adopted the Chinese system of astronomy and uranography because China had developed astronomy and uranography very early and because China was the politically dominant ancient civilisation in the region. (In 1145 CE King Injong ordered the eminent scholar Kim Pusik to write the Samguk Sagi (Historical Records of the Three Kingdoms) after the fashion of the Chinese dynastic histories - in order to beautify the style and to supplement his information. This is the oldest extant Korean history.) The system of 28 lunar mansions was adopted by Korea. The traditional number of 282 Chinese constellations (asterisms), centred on the North Pole, is used. In the Korean system of astronomy measurements of positions were based on the equator and the north celestial pole. The celestial sphere was divided into 5 'palaces.' In addition to the polar regions 9 sections of the sky were recognised. Three large enclosures that are frequently mentioned are: (1) Tianshi (the Celestial Market), which lies mostly in our constellation Hercules; (2) Taiweigong (often abbreviated to Taiwei) (the Grand Forbidden Palace), which occupies much of our constellations Leo and Virgo; and (3) Ziweigong (often abbreviated to Ziwei) (the Purple Forbidden Palace), which occupies the north polar region.
(3) Early Depictions of Korean Uranography
The depiction of constellations dates back at least to the (Korean) Bronze Age whilst star maps date back to at least the Three-Kingdom period circa 1st-century BCE.
(4) Dolmen Constellations
Korean astronomy (at least the depiction of constellations) originated during the (Korean) Bronze Age. Constellations were carved on the cover stones of dolmens which were erected in great numbers throughout the Korean peninsula. (Of the 80,000 dolmens claimed to exist throughout the Korean peninsula circa 1970 only approximately 25,000 were believed to remain circa 2000.)
The Korean dolmens are categorised into 3 types: (1) Northern dolmens (Jisangsukkwak), with a high cap stone (stone lid) supported by two or four megaliths, are "table" dolmens thought to be influenced by Siberian culture; (2) Southern dolmens (Paduk) have megaliths between underground chambers and cap stones; and (3) Mixed dolmens have an underground chamber covered by a cap stone without supporting megaliths.
The dolmens, believed to be the graves of local leaders, date between circa 2000 BCE and 200 CE. The Institute of Archaeology in North Korea has discovered over 70 dolmens in the area of Jongdong-ri in South Hwanghae Province that are inscribed with constellations. The depiction of constellations on dolmens predates active interaction with China. The constellations are marked on the top faces of the stone lids. The constellations are depicted by holes linked by groove lines. The different sizes of the holes, 10 centimetres to 2 centimetres, denote the degrees of brightness of stars. Some of the constellations depicted on dolmen cover stones include Ursa Minor and the pole star, Sagittarius, Orion, and Ursa Major. Ursa Major (the 'big dipper' asterism) is denoted by 7 holes linked in the shape of a dipper.
"The dolmens with engravings of astronomical charts are found mostly in Pyongyang, and number around 200. Before it was discovered that the holes on the surface of the dolmens represented stars, views differed as to what they might be. Some saw them as an expression of the worship of the sun or the heavens, while others associated them with funeral ceremonies. Some interpreted them as denoting the frequency of a certain ancestral rite, or the number of animals offered for sacrifice. Close examination of the arrangement of holes, however, revealed they were a representation of the constellations around the North Star.
The most well-known of these constellation patterns is found on the surface of a dolmen from Woesae Mountain in the South Pyongan Province. The cover stone of the dolmen tomb bears 80 holes, with a central hole representing the North Pole, and the others making up 11 different constellations. The size of the holes also varies throughout according to luminosity (brighter stars are larger), and when the observations were dated, taking the precession of equinoxes into account, it was determined that they represented the night sky from 2800 BC.
Constellation patterns found on a dolmen stone from the Pyongwon district in the South Pyongan Province were estimated to have been inscribed around 2500 BC, whilst the dolmen constellation found in the Hamju district of the South Hamgyong Province is dated to 1500 BC. When we look at the latter chart from the Hamju district, we can see that it is more accurate than the maps from previous eras. For instance, the holes corresponding to Great Bear and the Little Bear are more accurately distanced with reference to the pole star than in the Pyongwon chart, and stars down to the 4th-magnitude have been included.
In total, 40 constellations are displayed on the 200 dolmens in the valley of the Taedong River, including 28 from the regions around the pole star, skyline and equator. These include all the constellations visible at night from Pyongyang at 39 degrees north latitude, as well as the Milky Way and clusters of the Pleiades (the Seven Sisters). The charting of so many stars, before the invention of telescopes, is an unmatched feat in the history of astronomy. …
It is not certain why constellation maps were carved upon the dolmen tomb stones, but the general consensus is that ancient beliefs about death were linked to the worship of the heavens. This is also demonstrated by the fact that almost all the cover stones with astronomical markings are fashioned in the shape of a turtle's back. The turtle was revered by Koreans as one of the Ten Symbols of Longevity, and was believed to represent eternal youth. By making tombstones in the shape of a turtle, the people of ancient Korea believed they could enjoy a long life in the afterworld, and receive protection from the Turtle God. Representative of Korea’s prehistoric era, and recording something of the knowledge and culture of the age, the dolmens are an important part of the ancient history of East Asia." (Fifty Wonders of Korea, Volume 2: Science and Technology, 2008, Pages 13-15)
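The dating method mentioned in the quotation above relies on precession: over thousands of years the north celestial pole drifts in a circle around the ecliptic pole, so which star (if any) sits near the pole is itself a rough date stamp. The following is a minimal Python sketch of that principle only, using a deliberately simplified model (uniform precession, fixed obliquity, stars treated as fixed in ecliptic coordinates) and rounded modern coordinates for Thuban (alpha Draconis), the approximate pole star of the 3rd millennium BCE; it illustrates the technique and is not a reconstruction of how the dolmen charts were actually dated.

import math

OBLIQUITY = math.radians(23.44)   # mean obliquity of the ecliptic
PERIOD = 25772.0                  # approximate length of one precession cycle in years

def pole_longitude(year):
    """Ecliptic longitude (radians) of the north celestial pole at a given year.
    At J2000 the pole sits at ecliptic longitude 90 degrees; going back in time
    its longitude increases (the pole circles the ecliptic pole retrograde)."""
    return math.radians(90.0) + 2.0 * math.pi * (2000.0 - year) / PERIOD

def separation_from_pole(star_lon, star_lat, year):
    """Angular distance (radians) between a star (ecliptic lon/lat, radians)
    and the celestial pole at the given year (spherical law of cosines)."""
    pole_lat = math.pi / 2.0 - OBLIQUITY
    cos_sep = (math.sin(star_lat) * math.sin(pole_lat) +
               math.cos(star_lat) * math.cos(pole_lat) *
               math.cos(star_lon - pole_longitude(year)))
    return math.acos(max(-1.0, min(1.0, cos_sep)))

def closest_epoch(star_lon_deg, star_lat_deg, start=-5000, end=2000):
    """Scan year by year for the epoch when the star passed nearest the pole."""
    lon, lat = math.radians(star_lon_deg), math.radians(star_lat_deg)
    return min(range(start, end + 1), key=lambda y: separation_from_pole(lon, lat, y))

# Thuban (alpha Draconis), rounded J2000 ecliptic coordinates (~157.5, +66.3).
print(closest_epoch(157.5, 66.3))   # roughly -2800, i.e. the 3rd millennium BCE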
(5) Mural Tombs
Murals in tombs belonging to the Koguryu period frequently contain pictures of constellations. (Many old graves in East Asian countries - in the period from the 3rd-century BCE to the 8th-century CE - have paintings on the walls and ceilings.) Koguryu mural tombs were painted from the 4th-century CE to the 7th-century CE. As of 2008, 25 tombs had been discovered to have constellation paintings. Kim Il-gwon summarises ("Analysis of the Astronomical System of Constellations in Koguryo Tomb Murals." The Review of Korean Studies, 2008, Volume 11, Number 2, June, Pages 5-32): "Analyzing these constellation tombs, four guardian deities as mystical animals (Blue Dragon, White Tiger, Vermilion Phoenix, and Black Warrior) were guardians of this world. I discovered that Koguryo people developed Sasook-do, a unique constellation system of four directions that is in charge of guarding the cosmos. It consisted of the Big Dipper on the north ceiling, the Southern Dipper, the Eastern Double Three Stars, and the Western Double Three Stars. Each corresponds with the Great Bear, the Archer, the Scorpion, and Orion, respectively. The Three Polar stars are placed in the center and are enlarged according to Osook-do, a five constellation system for directions. The Southern Dipper was a very important constellation rarely seen among Chinese mural tombs during this period." The ceiling lid of Jinpha-ri Tomb Number 4 has more than 130 stars engraved on it (without constellation markers). Constellations decorating the ceilings of Goryeo period tombs copied the Koguryu period pattern.
(6) Early Korean Star Maps
Star maps are known to have existed in Korea as early as the Three-Kingdom period, circa 1st-century BCE to 10th-century CE.
The Pacific was explored and settled by people in two major episodes. The first major episode occurred during the late Pleistocene period (between approximately 50,000 and 30,000 BCE), when water crossings were made from mainland Asia through a chain of large and closely spaced islands stretching towards Australia and New Guinea (which were then joined together because of the low Ice-Age sea level). The second major episode of voyaging and settlement began after 1500 BCE, in geologically modern times, after millennia of maritime developments in Island Southeast Asia and western Melanesia. Highly skilled navigators used sophisticated outrigger and double-hulled sailing canoes to voyage into the remote Pacific Ocean.
(2) Oceanic Migration
The Pacific islands are usually divided into the 3 geographical and cultural regions of Melanesia, Micronesia, and Polynesia. The people of Polynesia share a common ethnic identity.
It was the Lapita culture that made the first substantial colonisation effort of the Oceanic region. The Lapita culture is the name given to the founding cultural group to initially settle the Oceanic region between circa 2000 BCE and 1000 BCE. Lapita is an archaeologically constructed culture. (The Lapita pottery culture is named after the site of Lapita, in New Caledonia, where some of the first pieces of distinctive Lapita pottery were found.) Lapita pottery represents an early community of culture in the southwest Pacific. Lapita pottery has now been discovered at nearly 200 sites spread out in a series of mostly coastal settlements, from Aitape to Samoa. Lapita culture shared linguistic, biological and cultural traits. The Austronesian ancestors of the Lapita culture came from Southeast Asia and were master seafarers and mobile colonisers. Their initial peopling of the Oceanic region focused on "Remote Oceania" (the area east of the Solomon Islands). It was here that the distinctive Lapita culture is believed to have developed. (Some authorities hold that the Lapita culture originated on Tonga and Samoa.) The settlement of Oceania beyond the Solomon Islands, by Austronesian-speaking peoples comprising the Lapita culture, commenced circa 2000 BCE-1500 BCE. (Archaeological evidence indicates that circa 1500 BCE human colonisation began to appear beyond the Solomon Islands. By circa 1000 BCE people had arrived in Samoa, in western Polynesia.) Lapita pottery has been found at more than 200 different places on islands in a broad arc of the southwestern Pacific from Papua New Guinea to Samoa. The Lapita culture comprised agricultural populations with skilled techniques of canoe navigation for ocean voyages. Within 400 years the Lapita culture had spread over an area of 3400 kilometres - colonising the remaining Oceanic area (i.e., Melanesia, Micronesia, and Polynesia). The first human settlements in the Caroline Islands of Micronesia and the Marquesas date to circa 500 BCE to 100 CE. Hawaii was settled circa 400 CE and New Zealand was settled circa 1000 CE. This series of migratory journeys initiated by the Lapita culture made their culture the prototype for Oceanic systems of astronomical knowledge.
At the time of initial European discovery and settlement, similarities in systems of astronomical knowledge existed between the different island groups comprising Melanesia, Micronesia, and Polynesia. However, because these were scriptless peoples, very little has been preserved.
(3) Star Path System
There was a widespread Oceanic practice of using "star paths" or guiding stars for inter-island and inter-archipelago navigation. According to Maud Makemson, an authority on Hawaiian star lore, the ancient Polynesians believed the sky was a dome (or inverted bowl) resting upon the rim of the hemispherical earth, where a star proceeded along a path which passed over certain islands. The Polynesians had names for over one hundred and fifty stars. A Polynesian navigator would have known where and when a given star rose and set, as well as which islands it passed directly over. A navigator could thus sail toward the star known to be over the destination and, as it moved westward with time, set course by the succeeding star which had then moved over the target island. The Hawaiian term for "steering star" was kawinga, meaning "that which is steered for." Ancient Hawaiian astronomical terms include kaniwa, meaning the Milky Way, and Mata-liki, meaning the Pleiades. Reconstruction of specific star names is difficult but includes takulua, meaning Sirius, and fetuqa-qaho (literally "star-day"), meaning Venus as morning star.
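One simple geometric fact underlying such star paths is that a star culminates directly overhead at places whose latitude equals the star's declination (its role as a "zenith star"). The following minimal Python sketch, using a short illustrative star list with rounded modern declinations and an arbitrary tolerance, picks candidate steering stars for an island from its latitude:

# Illustrative star list: (name, declination in degrees), rounded modern values.
stars = [
    ("Sirius", -16.7),
    ("Arcturus", 19.2),
    ("Spica", -11.2),
    ("Antares", -26.4),
    ("Altair", 8.9),
]

def steering_stars(island_latitude_deg, tolerance_deg=3.0):
    """Return stars that pass close to the zenith of the target island."""
    return [name for name, dec in stars
            if abs(dec - island_latitude_deg) <= tolerance_deg]

# Arcturus (declination ~ +19 degrees) is often cited as the zenith star of
# Hawai'i (latitude ~ 20 degrees north).
print(steering_stars(20.0))   # ['Arcturus']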
(4) Special Constellations
Both Orion and the Pleiades had a special place in Polynesian society.
Maori of New Zealand
The Maori first reached New Zealand circa 1300 CE. Maori ancestry can be traced back to the inhabitants of the Bismarck Archipelago east of New Guinea circa 1500 BCE. However, their immediate previous homeland was in the islands of central east Polynesia.
In New Zealand the Maori recognised and named various groupings of stars and also named individual stars. They had names for all the brighter stars and a large number of constellation names also. Only a few of the Maori constellations/asterisms correspond to modern Western constellations. The most extraordinary Maori constellation was the Canoe of Tama-rereti [Tamarereti], the mythical ancestral canoe of the Tainui people. (Te waka a Tamarereti = The Canoe of Tamarereti; this mythical ancestral canoe was possibly connected with an ancient Polynesian navigator and voyager.) This was an important constellation. This extremely large constellation extended across the sky from Taurus and Orion to Crux. The Canoe of Tama-rereti consisted of the following parts: the Pleiades formed the prow, the 3 stars of Orion's belt formed the stern, the Hyades formed the inverted triangular Sail [Mast], the distant Southern Cross formed the anchor, and the two bright pointer stars trailing behind the Southern Cross (alpha and beta Centauri) formed the cable. Near Orion was "the net" (Tehao o Rua). In some lore the star Sirius is stated to guide the canoe. The canoe seems to sail along the Milky Way. (The Torres Strait Islanders (Australia) also have an extremely large canoe constellation named: Tagai the fisherman (or warrior) on his canoe.) In Maori New Zealand both Orion and the Pleiades (Matariki) competed with each other for control of the year.
The Maori also recognised both the Large and Small Magellanic Clouds and also the Coal Sack in the Southern Cross constellation.
(1) Native American Star Lore
North America is home to numerous independent Native Indian cultures. Circa 12,000 BCE Asian hunter-gatherers crossed the frozen Bering Strait into the American continent and eventually separated into tribes. It is impossible to know when the first Americans developed a knowledge of astronomy and developed star lore. (Another theory holds that the first Americans may have used boats to make the crossing into North America. There is a growing viewpoint that the first people to enter the American continent were skilled sailors who came by boat circa 11,000 BCE, island-hopping from Siberia all the way to the coast of California.)
Native American tribes of the northeastern part of North America commonly identify the 7 key stars of Ursa Major (the "big dipper" asterism) as a bear. The identification of Ursa Major (or more accurately the stars forming the big dipper asterism) as a bear in North America largely exists in the Algonkin (Algonquin) speaking groups but also in the Plateau groups. In his 1906 article "Cherokee Star Lore." (Boas Anniversary Volume, Pages 354-366) Stansbury Hagar remarked that generally among the Native American Indians the most important constellations were Ursa Major (= the big dipper) and the Pleiades.
The first European mention of North American Indian astronomy was made in 1524 by the Italian explorer Giovanni di Verrazano who encountered the Narragansett Indians of Rhode Island. His account mentions that their seeding and cultivation of legumes (plants that house their (highly nutritious) seeds in double-seamed pods) were guided by the moon and the rising of the Pleiades. (The use of the Pleiades as an indicator of seasonal change was recognised throughout the North American continent.)
(2) Pawnee Nation
The Pawnee Nation, a (Great) Plains group of Indians comprised of different divisions/bands, originally lived near the Platte river in Nebraska. Much of the Pawnee Nation ritual was directed by the stars. The construction of the earth-lodge of the Pawnee was directly influenced by their star cult. The earth lodges they built were miniature models of the cosmos.
Pawnee Nation constellations included the Bird's Foot (located in the Milky Way), the Bow (located between the Pleiades and the Milky Way), the Deer (possibly the belt of Orion), the Pleiades, the Snake (with Antares forming the head), the Swimming Ducks (located near the Milky Way), and the Stretchers (the bowls of the Big and Little Dipper asterisms). It is now difficult to identify the stars comprising most Pawnee constellations.
The Skidi band of the Pawnee Nation possessed a rich and detailed star lore tradition. Much of what is known about the star lore of the Skidi Pawnee comes from James Murie, whose mother was Skidi Pawnee. He worked primarily with the anthropologists Alice Fletcher and George Dorsey. The Skidi band were organised by the stars. They developed an intricate and direct affinity to the stars that was unmatched by any other Native Indian group. Stars controlled the position and ceremonies of the villages comprising the Skidi band. The placement of the Skidi band villages reflected the position of their stars in the heavens. The 4 world-quarter stars that controlled the position and ceremonies of the villages comprising the Skidi band were initially thought to have possibly been the 4 stars forming the body of the constellation of Ursa Major. More recently the 4 world-quarter stars are thought by Von Del Chamberlain to be Capella, Antares, Sirius, and Vega.
(3) The Thunderbird Constellation
According to Nancy Maryboy and David Begay in their book, Sharing the Skies: Navajo Astronomy (2010), the Navajo thunderbird constellation comprises the stars of the Greek constellation Sagittarius. However, the Native American thunderbird constellation has been alternatively identified as being above Scorpius and incorporating some of the stars of Ophiuchus, Serpens Caput, and Serpens Cauda.
(4) The Problem of Authentic Traditional Native American Star Lore
It is commonly held that the existence of certain parallels between Siberian/Asian star lore and North American star lore relating to the big dipper asterism establishes a pre-Columbian origin for the latter and also an Ice-Age antiquity for it. Proponents maintain that the big dipper bear constellation entered the American continent with a wave of immigrants circa 14,000 years ago. The eminent science historian Owen Gingerich, in his article "The origin of the zodiac." (Sky and Telescope, Volume 67, 1984, Pages 218-220), proposed that a bear constellation crossed the Bering Straits with ancient migrants. Gingerich acknowledges Campbell's chapter "Circumpolar Cults of the Master Bear." (Pages 147-151) in his book The Way of Animal Powers: Historical Atlas of World Mythology. Volume 1 (1983). However, the idea is problematic and remains highly controversial. Both of William Gibbon's papers on "Asiatic Parallels in North American Star Lore." (published 1964 and 1972 in the Journal of American Folk-Lore) present a strong case for the common origin of the bear constellation in Asia and America. I would, however, hesitate to conclude that he has conclusively presented the case for such. In 1902 Waldemar Bogoras published a study ("The Folklore of Northeastern Asia, as Compared with that of Northwestern America." (American Anthropologist, Volume 4, Number 4, December, Pages 577-683)) showing that many folklore tales of northeast Asian peoples had often striking similarities to the folklore tales of the Inuit and Northwest American tribes. Obviously the question is: How to account for the similarities? However, the issue is perhaps more complex and uncertain than simply arguing for Ice-Age diffusion. Both the ancient Asian and the North American cultures were bear-hunting cultures. If persons wish to maintain that the Native Americans brought the bear constellation with them circa 14,000 years ago when they entered the American continent then they need to make a suitably convincing case that can deal with the problematic issues. (Paul Shepard and Barry Sanders (The Sacred Paw, 1985), whilst admitting that not every Native American tribe knew of a bear in the sky, simply state: "Some, apparently, had forgotten.") The conclusive case for the early entry of the bear constellation into the Americas has perhaps yet to be incisively made. An early (i.e., pre-Columbian) Native American depiction of the Great Bear constellation would be a convincing discovery. Overall, the Native Americans had few constellations.
Today, archaeological methods have largely replaced the previous method that made almost exclusive use of ethnographic parallels in determining the history of arctic peoples, including Eskimos (Inuit). Stansbury Hagar remarked in his 1900 article ("The Celestial Bear." (Journal of American Folk-Lore, Volume 13, Number 49, Apr.-Jun., Pages 92-103)) on the Native American bear constellation: "When we seek legends connected with the Bear, we find that in spite of the widespread knowledge of the name there is by no means a wealth of material."
The identification of Ursa Major (or more accurately the 7 stars forming the big dipper asterism) as a bear constellation in North America largely exists in the Algonquin speaking groups of northeastern North America and also in the Plateau groups living in the northern part of East Oregon. (The Plateau group lived in the area between the Cascade Range on the west and the Rocky Mountains on the east, and north of the Great Basin. The Plateau group culture was not stable.) It would seem that few of the southwestern Indian tribes (approximately 18) identified the big dipper with a bear constellation. (It all depends on who is included in the list and, vice-versa, who is excluded. The Zuni and Jemez are commonly excluded.) The southwestern Indian tribes tend to call the stars of the big dipper "the seven." (An exception are the Southern Paiute who identify the big dipper as a bear. The Keresan Sia (a Pueblo tribe) also appear to identify the big dipper as a bear.) It is commonly held that apart from some very early and transient Spanish (and Portuguese) contact the southwestern tribes appear to have remained almost untouched by European influence (but not European contact) until the late 1800s. However, in her 1936 article "Riddles and Metaphors among Indian Peoples." (Journal of American Folk-Lore, Volume 49, Numbers 191/192, Jan.-Jun., Pages 171-174) Elsie Parsons observed: "The Pueblo Indians have been exposed for centuries to Spanish riddles and tales; they have taken over the tales but not the riddles." Jesuit missionaries seem to have had far-reaching contact with most other tribes during the 1500s. (See also: "Pueblo Indian Folk-Tales, probably of Spanish Provenience." by Elsie Parsons in Journal of American Folklore, 1918, Volume 31; and "Spanish Tales from Laguna and Zuñi, New Mexico." by Elsie Parsons and Franz Boas in Journal of American Folklore, 1920, Volume 33.) Few southwestern tribes appear to have a bear constellation. Also, it is recognised that the astronomical information that has been recorded in this region by ethnologists is frequently very confused and contradictory. (It has been pointed out that the publication Ethnography of the Tewa Indians by John Harrington (1916) needs to be used with some caution (and this includes the astronomical information on star names and constellations) as his willingness to pay for information resulted in him being misled by some informants (now called consultants). The Tewa are part of the Pueblo Indian group.) The reference(s) used by William Gibbon have the Zuni Indians and the Jemez Indians identifying the big dipper as a bear constellation. It is doubtful, however, that the Zuni can be included in the list, because the Zuni identification of the big dipper as a bear constellation usually relates to references to Stansbury Hagar or Frank Cushing.
Currently archaeology and ethno-history both have an emphasis on cultural interaction. The evidence associated with the history of the pre-contact period and also the post-contact period of the Americas shows that cultural groups do not exist for any extended periods of time in total isolation and that cultural interaction has shaped even those cultural groups who lived in remote and sparsely populated regions. For the northeast region of Canada there is now sufficient archaeological and historical knowledge to understand the inter-related history of the Paleo-Eskimo, Inuit, Dorset, Beothuk and the Mi'kmaq and Abenaki cultures. These cultures largely shared the same climate and geography and their worlds often intersected. The arctic and sub-arctic regions and their adjacent coasts are increasingly recognised as longstanding "highways" rather than as barriers to the flow of plants and animals, peoples and cultures. Siberian influence in several early Alaskan cultures is now recognised, and Bering Strait sources are known for many features of Eskimo cultures found across the Arctic. The evidence reinforces the intimate relation that exists between culture and environment and it shows that climate, in particular, often plays a determining role in cultural interaction and technological innovation. The late Danish ethnologist Kaj Birket-Smith (once Curator of Ethnology at the National Museum in Copenhagen, Denmark) wrote: "The cultural link between northern Eurasia and North America is so close that the two parts should be regarded as a single circumpolar cultural district in which a similar environment forms the basis for common development."
A nineteenth-century paper by John Murdoch ("On the Siberian Origin of Some Customs of the Western Eskimos." (The American Anthropologist, Volume 1, Oct., 1888, Pages 325-336)) holds that the use of tobacco, fishing nets, and the bird-bolas amongst the Western Inuit originated from contact with Siberia. Such offers the prospect of a pathway for other cultural borrowing. Tobacco use literally diffused around the world within 100 years of the European discovery of the American continent. In his book The Beothucks or Red Indians (1915) James Howley held that the spear design (for killing seals), and also the technique for its use, used by the Beothuks of Newfoundland was borrowed from the Eskimos (Inuit). (The last Beothuk died in captivity in 1829.) The antiquity of the Eskimo (Inuit) presence is not known for certain, but it is now generally agreed that they were relatively recent migrants to the Americas from northeast Asia, spreading across the top of North America from west to east over the course of the past 6,000 years. (The Eskimo-Aleut migration circa 4,000 BCE populated (for the first time) the Arctic coastal zone of North America. Another migration took place more recently, circa 1,000 CE.) At Blue Hill Bay on the central Maine coast at least one stone tool, made from non-native stone, has been found in the style of the Dorset Culture, a prehistoric Eskimo (Inuit) people. It is evidence that before European contact, the Indians living in the coastal Maine area had long-distance relationships through trade with people living in the far north.
Native American groups have always "borrowed culture" from one another. This includes stories. For an example of the dispersion of myths between Native American tribes (i.e., "The Story of the Waiwailus" from the Bella Coola to the Chilcotin) see A Guide to B. C. Indian Myth and Legend by Ralph Maud (1982, Page 85). Historically, the diffusion of agriculture throughout the Americas probably originated from the Valley of Mexico. The diffusion of the Sun Dance throughout much of North America probably originated from the Plains Area. The Algonquin who moved into North Carolina borrowed from their southern neighbours as they adapted to the geographical and climatic conditions of the area. The people of the Woodland Period in the Champlain Valley borrowed from other groups around them.
The Mi'kmaq Indians were among the first Native Americans to have contact with Europeans. This contact began in the early 1500s with the exploration of Cape Breton by the French Bretons. (In 1497 the seaman John Cabot, sailing for England, discovered the northeast coast of America and also reported an abundance of cod on the Newfoundland Banks.) Virginia Miller (who taught at Dalhousie University) believes there was intensive contact between the Mi'kmaqs and Europeans throughout the 16th-century (and earlier). (See: "The Decline of Nova Scotia Mi'kmaq Population, A.D. 1600-1850." in Culture, Volume 2, Number 3, Pages 107-120.) Beginning in 1501, a variety of European fishing boats (Basque, Spanish, French, British, and Irish), comprising some 10,000 fishermen, visited the Grand Banks every summer and returned to Europe in the autumn. A few crew members stayed over the winter, past their seasonal fishing tasks, to maintain the shore installations. A few persons even resided permanently as "liveyers." By 1519 these fishermen were coming ashore to dry their catch.
Influences for the post-Columbian introduction of some European star lore and constellations to the Native Americans include: missionaries, explorers, traders (including coureurs de bois ("wood rangers") who were free traders who accompanied the Native Americans on their hunting expeditions), colonists, trappers, captives, military alliances, inter-marriage, tribal relocations (migrations and reservations), Indian schools, and ethnologists (exchanging tales). Of these early cultural contacts the key ones were French commercial connections and frequent intermarriage with Native Americans (in Canada), and Spanish military and religious contact (i.e., the mission system) (in Mexico and the Southwest USA). By the 17th-century European colonists had made direct contact with most Native American communities. Some assimilation had also taken place by this early date. It is not too difficult to expect that some European constellation beliefs were transmitted to Native Americans after Columbus. In his 1906 article "Cherokee Star Lore." (Boas Anniversary Volume, Pages 354-366) Stansbury Hagar also remarked that the Mi'kmaq tradition of the Three Kings (= the three stars of Orion's belt) is evidently of European origin.
In the west the marriages between early French settlers and Native Americans created the Métis (a French term) of western Canada. The Métis were the result of marriages of Woodland Cree, Ojibway, Saulteaux, and Menominee Native Americans to French settlers circa the mid-seventeenth century. The Métis homeland consisted of the Canadian provinces of British Columbia, Alberta, Saskatchewan, Manitoba, and Ontario, as well as the Northwest Territories. It also included parts of the northern United States (specifically Montana, North Dakota, and northwest Minnesota).
The differences between European and Native American bear constellations do not pose a problem for late borrowing. Europe and North America have two different bear constellations. The European bear constellation is inherited from ancient Greece. The Greek bear constellation has a long tail (but real bears have only a very short tail). With the Greek sky-bear the stars of the big dipper form the hindquarters and tail of the bear, with other stars forming the head and paws. The Native American bear constellation has no tail. In most North American folk-tales the 4 stars comprising the cup of the big dipper are the bear and the 3 stars comprising the handle of the big dipper are warriors chasing the bear (around the pole). However, it has been recognised that the wide familiarity of the seven big dipper stars would tend to make them readily susceptible to the influence of European star lore. For several examples of this see The Arctic Sky by John MacDonald (1998). The later movements of Native American tribes would have assisted in the diffusion of these beliefs.
Just as the European celestial bear is not the hunter but the hunted (i.e., Boötes the Bear-keeper/Bear-guard chases both the Big Bear and the Little Bear), in the Mi'kmaq myth the bear is not the hunter but, at least with one bear constellation, is the hunted. The identification of a Little Bear constellation by the Mi'kmaqs is somewhat problematic. There is every reason to believe the constellation of Ursa Minor (Little Bear) is a late Occidental invention; perhaps introduced to the Greeks from Phoenicia by the Greek philosopher Thales of Miletus circa 600 BCE. (According to the Greek historian Strabo (63/64 BCE - circa 24 CE) Ursa Minor (known as the Phoenician bear) was introduced as a superior navigational aid.) Mi'kmaq knowledge of a Little Bear constellation seems very much like a borrowing from Europeans.
During the 19th-century in North America ethnology was a branch of anthropology which focused on recording the rapidly disappearing traditional cultures and beliefs of the Native Americans. Only after 1875 did American ethnologists conduct extensive fieldwork among living Native Americans. For decades they simply concentrated on collecting reminiscences of traditional cultural beliefs from a few elderly native informants (now called consultants) who: (1) claimed to remember what life had been like in their youth, and/or (2) had knowledge of historic cultural beliefs and practices. However, because many Native American peoples had been so radically altered by European influence by the time they were studied, the (salvage) ethnologists were quite unable to verify what they were being told. Often they only spent limited time with their native informants (now called consultants) - a number of hours per day for up to several weeks. Evidence of significant cultural change was usually simply ignored.
At the beginning of the 16th-century, news of the rich fishing waters off the coast of Nova Scotia spread quickly in Europe. By the early 1600s missionaries had established solid contact with the Mi'kmaqs and were living amongst them. (An example is the Jesuit Pierre Biard who had a missionary station among the Mi'kmaq in Nova Scotia from 1611 to 1612. By this time there was also an immense amount of contact with fur traders and European fishing fleets.) However, the first Frenchman to master the Mi'kmaq language was the Catholic missionary Abbé Antoine-Simon Maillard. From 1735 to 1762 he lived with the Mi'kmaq Indians at Restigouche on the Gaspé Peninsula, Quebec. For the early influence of European Catholic beliefs upon Mi'kmaq religion see: "Culture Change in the Making: Some Examples of How a Catholic Missionary Influenced Mi'kmaq Religion." by Carlo Krieger (American Studies International, Volume 40, Number 2, June, 2002, Pages 37-56). Stansbury Hagar was one of the first ethnologists to collect Mi'kmaq tales. This work was conducted in 1895, 1896, and 1897. By the 20th-century the Mi'kmaq had lost nearly everything in their culture and a number of them actually became engaged in the process of borrowing from other Native American cultures - predominantly those located across the border in the USA. (An example of this is the late adoption by the Mi'kmaq of the feathered head-dress.) For a time even the Mi'kmaq language was at risk. It had largely ceased to be spoken and had been diluted by the French language.
The Indians of South America are considered to have observed the stars in considerable detail. Certainly established empires such as the Inca did so.
(2) Brazilian Indians
It is estimated that currently (2009) about 700,000 Indians live in Brazil, mostly in the Amazon region. Some 400,000 of these live on reservations. The Bakairi (or Kurâ) Indians of the Amazon basin (Central Brazil) identify the stars of Orion as a large frame on which manioc is dried. The star Sirius is the end of a great crossbeam supporting the frame from the side. The Bakairi Indians (who in 2000 CE numbered around 950 persons) live around the Xingu River in the State of Matto Grosso, Brazil. Their spoken language is part of the Karib (Cariban) family. The Taulipang Indians, a tribe in the tropical jungle of the Guianas region on the north-east/north-central coast (northern Brazil), saw the stars of Ursa Major as a barbeque grill. (Both the Arawak Indians of mainland South America (northeastern South America) and the Warrau Indians of the Orinoco delta and adjacent swampy regions of the coast of British Guiana (northern South America) perceived the Pegasus-square as a barbeque grill.) The Taulipang see the stars of Orion as Zilikawai, the Great Man. In the stars of Leo they see a mythological figure named Tauna, the god of thunder and lightning. For the Bororo Indians of the Amazon Basin in Brazil the stars of Orion form the body of the Cayman (a crocodilian reptile) - with the stars of Lepus forming the head and stars in Taurus and Auriga forming the tail. They also identified the stars of Orion with jabuti, meaning (land) turtle. The Kobeua Indians of northwest Brazil (and east Colombia) perceived the stars of Boötes to be a Piranha. The Tukano (Tucano) Indians of the Amazon Basin, Brazil, the Siusi Indians (an Arawak tribe) of northern Brazil, and the Kobeua Indians see a crayfish in some of the stars of Leo. They also see in the stars of Scorpius the Great Serpent. The Tucano and the Kobeua see a heron in the stars of Corvus. The Siusi Indians use 6 stars in Eridanus to form a dancing implement. For the Bakairi Indians the stars of Scorpius are Mother with Baby. The stars forming the constellation Crux are seen by the Bakairi Indians as the Bird Snare; by the Bororo Indians as the Great Rhea; and by the Mocovi Indians of Argentina (who are currently (2008) estimated to number around 3,500 persons) as a Rhea under attack by two dogs. The Common or Great Rhea is one of two species of flightless ratite birds native to South America. The other species is the Lesser Rhea. Both these species are related to the Ostrich and the Emu.
(3) The Thunderbird Constellation
The Warao Indians (numbering some 30,000 persons circa 2000 CE) of the Orinoco Delta in northwestern Venezuela have a thunderbird constellation comprising the stars of the Southern Cross. In the Guianas and in the West Indies the thunderbird constellation (Wakarasab = great egret) - according to the Wapisiana - includes the stars of Gemini, Cancer, and Leo. (See: Journal of American Folklore, Volumes 56-57, 1943, Page 134.)
(4) Inca Empire
The Inca Empire flourished between the 14th-century CE and the Spanish conquest in 1532 CE. The capital city Cuzco was the centre of its social, administrative, and religious organisation. Within the Inca Empire sky watching and associated sky lore was closely associated with the calendrical system that regulated agricultural practices on earth.
Throughout the Inca Empire of the south and central Andes the Milky Way (called Mayu) was a central object and was identified as the celestial counterpart of the Vilcanota river. As such the Milky Way represented the source of all moisture (wetness caused by water) on earth. Within the Incan Empire most of the named constellations and individual stars lie wholly within or close to the plane of the Milky Way. The constellations are formed not only by the grouping of stars themselves but also by dark sections of the sky (due to dark clouds of interstellar dust blocking starlight). Areas of the southern portion of the Milky Way appear as silhouettes contrasted against the brighter band of sky. (A number of southern hemisphere cultures regarded these dark areas as constellations.) Individual stars or groups of stars were identified as agricultural implements or architectural structures. However, the Inca perceived a variety of (animal) constellation figures in the dark regions of the southern sky (southern portion of the Milky Way). It seems that Quechua story tellers thought of certain "black" constellations and certain star clusters as the celestial prototypes of the earthly beings they resembled. The dark constellations (or dark cloud constellations) were perceived as animals, such as the Llama dark cloud constellation. The seasonal motion of these constellations was used by the Inca to track the passage of the seasons and to mark sacred events. The Incan names for the dark cloud constellations were Yana Phuya and Pachatira. Yana Phuya (literally, 'dark cloud') was the collective name for the dark cloud constellations, and Pachatira meant 'animal constellations.'
Until recently the certain identity of the dark cloud constellations has been elusive. Numerous early Spanish chroniclers reported that the Inca identified "dark" animals in the sky in the region of the Milky Way. Some 600 years later these were finally identified by Gary Urton in 1982 as patterns formed by the contours of dark regions of the Milky Way. (These dark regions are dark clouds of interstellar matter.) The 7 dark constellations identified by Gary Urton are:
(1) Celestial Serpent (between the star Adhara in Canis Major, and the Southern Cross).
(2) Celestial Toad (near the Southern Cross).
(3) Celestial Tinamou (Yutu (a partridge-like bird), the "coalsack" below the Southern Cross).
(4) Celestial (Mother) Llama (between the Southern Cross and epsilon Scorpio).
(5) Celestial Baby Llama ("below" Mother Llama).
(6) Celestial Fox (between the tail of Scorpio, and Sagittarius).
(7) Second Celestial Tinamou (in the constellation Scutum).
Two other likely Inca dark cloud constellations, identified by Gary Urton and Giulio Magli are: The Choque-chinkay = "golden cat" (tail of Scorpio or perhaps dark spot inside tail), and the Puma (between Cygnus and Vulpecula).
Early Quechua tribes believed that the dark cloud constellations played an active part in the circulation of water. (The Quechua (speaking) people inhabited the Peruvian highlands/Andes.) The Milky Way as a celestial river was believed to be the route by which water was conveyed from the cosmic sea, to the sky, and then to the earth. The Quechua thought that when the Milky Way set below the horizon the dark cloud animals dipped into the cosmic sea to drink water before passing to the Underworld. When the Milky Way rose above the horizon the Quechua thought the dark cloud animals conveyed the water into the atmosphere and released it as rain. The basis for the belief associating dark cloud animals with the yearly water cycle was the observation that the dark cloud animals remained below the horizon during the dry summer months and then rose above the horizon during the rainy season.
It also seems that Quechua visionaries saw certain "black" constellations and certain star clusters "descend" to earth and shower their protégés with specific vital force.
(5) The Figures on the Nazca Plain
The line figures set out on the Nazca plain have been popularly promoted as constellation figures. The German mathematician Maria Reiche, who spent some 50 years both studying and protecting the Nazca lines, believed the figures corresponded to constellations, and also thought the figures were part of an astronomical calendar. She believed the monkey figure was an ancient symbol for the Big Dipper asterism. It is now generally believed that it is unlikely that the Nazca line figures represent constellations or astronomical alignments. It is thought most likely that these line figures are simply maps intended to attract the Andean gods/goddesses and obtain their blessings of water and fertility for ayllu (kin group) landholdings.
The United Nations has formal definitions of Northern African, Western African, Central African, Eastern African, and Southern African countries/territories. However, it is useful simply to divide Africa into two parts: countries/territories north of the equator being northern Africa and countries/territories south of the equator being southern Africa.
(2) The Extent of African Star Lore
It has been stated that apart from areas in The Sudan, northeast Africa, and Zimbabwe (formerly Rhodesia), not much of Africa has had any considerable knowledge of the stars. Also, there is a greater amount of star-lore in the south (amongst sub-Saharan African groups) than in the north (North African groups). Because the northern and southern skies differ, the star-lore of the north is different from the star-lore of the south. According to Keith Snedegar (an expert on African star lore, writing in 1995): "There is no substantial evidence that the pre-colonial Africans imagined a causal relationship between celestial bodies and the seasonal patterns of life on Earth. They did, however, recognize a coincidental relationship. The traditional African cosmos, then, worked as a noetic [rational/reasoned/intuitive] principle unifying the observed motions of celestial bodies, the sequence of seasons, and the behaviour of plants and animals. ... The visibility of conspicuous stars and asterisms marked significant times of the year."
(3) North African Star Lore
The United Nations definition of Northern Africa includes the following 7 countries or territories: Algeria, Egypt, Libya, Morocco, Sudan, Tunisia, and Western Sahara.
(4) Central and East African Star Lore
The United Nations definition of Western Africa includes the countries Ghana and Nigeria. Western African countries are located in northwest Africa. The United Nations definition of Central Africa includes the country of Angola. The United Nations definition of Eastern Africa includes the countries Ethiopia, Kenya, Mozambique, Somalia, Tanzania, Uganda, and Zimbabwe. Eastern African countries are located from northeast Africa through central-east Africa to southeast Africa.
The Masai of East Africa (Kenya and Tanzania) call the Pleiades cluster the "rain stars." This ancient warrior tribe is thought to have originated from north Africa and migrated south following the Nile valley. They speak Maa, a Nilotic language.
(5) South African Star Lore
The United Nations definition of Southern Africa includes the following 5 countries: Botswana, Lesotho, Namibia, South Africa, and Swaziland.
The star-lore of the Khoikhoi (the older term Hottentot is now considered offensive) is considered quite detailed. Originally the Khoikhoi (meaning 'People People') formed part of a pastoral culture and language group that originated in the northern area of the modern Republic of Botswana. Botswana lies immediately north of South Africa. The Khoikhoi steadily migrated south and reached the Cape of South Africa circa 2000 years ago. The Khoikhoi are one of 3 major tribes of South Africa. The other 2 tribes are the Bantu and the Bushmen. (More broadly South Africa contains approximately 12 different ethnic and cultural groups.) The Khoikhoi identify the visible planets and call Venus, as morning star, "the Fore-runner of the sun," and Venus, as evening star, "the Evening fugitive." Mercury is called "the Dawn-star." When observed "in the middle of the sky" Jupiter is called "the Middle-star." The 6 stars comprising the belt and sword of the European constellation Orion form a group called "the Zebras." Explanations for the name(s) given to the Pleiades by the Khoikhoi differ. One source states the Pleiades cluster is either called "assembly" or "the Rime-star." Another source states the Pleiades cluster is named "Khuseti" or "Khunuseh" and called "the rain stars." The appearance of the Pleiades in the sky is an indicator that the rainy season is near, and also the beginning of a new year.
The Namaquas believed the Pleiades were the daughters of the sky god. The star Aldebaran was their husband, and Orion's sword was the arrow (which fell short) that he shot at the three zebras of Orion's belt. The star Betelgeuse was a fierce lion which sat watching the three zebras. (The Nama or Namaqua people speak Nama and reside in South Africa. The Nama language is part of the Khoe-Kwadi (Central Khoisan) language family. The Nama people are the largest group of the Khoikhoi people, most of whom have largely disappeared as a group, except for the Namas. The Nama people originally lived around the Orange River in southern Namibia and northern South Africa. The early European colonists referred to them as Hottentots.)
The Bushmen of South Africa call the Pointers (alpha and beta Centauri) "the Two Men that once were Lions." They call the Milky Way the path of white ashes. In Bushmen lore it is believed to have been a path made by a young girl who threw the roasting roots and ashes from a fire into the sky. The red and white roots glow as red and white stars and the ashes are the Milky Way. The star Canopus is presently called the "ant-egg star" due to the time of its appearance (prominence) in the sky coinciding with the season for the abundant availability of ant-eggs.
The Sotho, the Tswana, the Xhosa, and the Zulu people of South Africa all form part of the Niger-Congo linguistic group. All possess a rich star-lore. (The Niger-Congo linguistic group of languages includes the Bantu dialects and the Kwa languages.)
The Zulu people of South Africa had a number of constellation names. The Zulu were part of the Nguni (speaking) communities who formed part of the Bantu migrations down the east coast of Africa circa the 9th-century CE. Their language isiZulu is a Bantu language. Circa 1700 CE they comprised a major clan residing in Northern Natal. The Zulu called the 3 stars of Orion's belt imPhambano. These three stars were considered to depict 3 animals, most usually wart hogs. The 3 stars of Orion's belt were named amaRoza by the Xhosa. The Xhosa people are speakers of Bantu languages residing in south-east Africa. They migrated into South Africa from the region around the Great Lakes (in and around the Great Rift Valley) and were well established in much of eastern South Africa circa 1600 CE. Because it showed them how to navigate in the bush at night the Zulus called the Southern Cross constellation the Tree of Life. For this purpose it was an important constellation not only to the Zulus but also to the Sotho and Tswana of southern Africa. Sotho is a Bantu language and the name of a southern African people. Sotho is spoken by the Sotho people living in Lesotho and South Africa. Tswana is also a Bantu language and the name of a southern African people. The Tswana migrated from east Africa to southern Africa during the 14th-century. Both the Zulu and Xhosa names for the Pleiades cluster were isiLemila (also spelled selemela), the "digging stars" ("ploughing constellation"). The first visibility of the Pleiades played an important role in both Zulu and Xhosa culture. Both the Zulu and Xhosa peoples were traditionally agrarian cultures and the Pleiades formed the main basis of their respective calendars. The appearance of the Pleiades (in June) marked the beginning of the planting season. The start of the new agricultural year, and the requirement to start preparing the fields, was indicated by the appearance of the Pleiades in the morning twilight. It was for this reason that the Pleiades were dubbed the "digging stars." (The Pleiades were used all over Africa as a marker of the growing season and the need to begin hoeing the ground.) The Tswana people called the stars of Orion's sword "dintsa le Dikolobe," "the three dogs chasing the three pigs" of Orion's belt. (While the constellation Orion is prominent in the night sky, warthogs have their litters, which vary from 1 to 5 pups but frequently number three.)
Of the constellations which bear a name in Suto, the best known is that of the Pleiades. They call it "selemela," that is, the "ploughing constellation."
The Xhosas likened the Milky Way to the raised bristles on the back of an angry dog. The Sotho and Tswana believed it to be "Molalatladi," the place where lightning rests. They also believed it kept the sky from collapsing, and showed the movement of time.
The most recent detailed work on indigenous (Sotho, Tswana, Xhosa, and Zulu) astronomy and astronomical folk-lore in South Africa was conducted by Keith Snedegar (Associate Professor of History at Utah Valley State College) circa the early 1990s. He found that some names reflect the celestial and physical appearance of the celestial body. As example: the names for Canopus (the second brightest star in the sky) are Naka / U-Canzibel / uCwazibe (more simply written as U-Canzibe), meaning "brilliant." The names for Sirius are Kgogamashego / Imbal'ubusuku / inDosa, meaning "the drawer up of the night." The names for the Milky Way are Molalatladi / Um-nyele / umTala, meaning "a hairy stripe." Many other celestial bodies are named for local animals, both domesticated and undomesticated, sometimes with, and sometimes without, correlation between the appearance of the celestial body and the mating season or birthing season of the animal.
The bright stars of the Pointers (alpha and beta Centauri) and the Southern Cross were often believed to be giraffes, with different tribes having different ideas concerning which were male and female. The Venda people called these stars "Thutlwa" ("rising above the trees"), because in October the giraffes ("Thutlwa") would skim above the trees on the evening horizon, signaling the need for the Venda to finish their spring planting. (Venda is a Bantu language. Venda was declared a nominally independent homeland in northern South Africa in 1979, and was reincorporated into South Africa in 1994. There are also Venda speakers in Zimbabwe.)
For the Karanga people the stars were the eyes of the dead. (The Karanga people of Zimbabwe ruled a great inland African empire from circa 1000 CE to circa 1600 CE.) Some Tswana believed the stars were the spirits of those unwilling to be born; other Tswana believed they were souls so long dead that they were no longer ancestor spirits.
(1) Age of Aboriginal Settlement in Australia
The indigenous Australians, generally distinguished as either Aboriginal people or Torres Strait Islanders, are the first human inhabitants of the Australian continent. (The Torres Strait Islands are at the northern-most tip of Queensland near New Guinea. They comprise over 100 islands that were annexed by Queensland in 1879. The heritage and cultural traditions of the Torres Strait Islanders are distinct from Aboriginal heritage and cultural traditions. The eastern Torres Strait Islanders are related to the Papuan peoples of New Guinea, and they speak a Papuan language.) The capitalised term Aboriginal is now only applied to traditional hunter-gatherers; the Torres Strait Islanders traditionally practised agriculture. Australia was originally reached by sea crossings by colonisers (moving on from New Guinea, and perhaps Indonesia) who lived by hunting, gathering, and fishing. These early colonisers made flaked stone tools and also some large axe-like implements. The earliest evidence for human habitation in Australia is Mungo Man, dated (by consensus in 2003) to circa 40,000 years ago. His remains (actually the gender identification is not conclusive), dated to the Pleistocene Epoch, were discovered at Lake Mungo, New South Wales, in 1974. At the time of the first European contact the Aboriginal population of the Australian continent was split into some 250 individual nations. Most are quite distinct from each other. (It is also stated that there are about 400 indigenous cultures in Australia.) Though Aboriginal Australians are broadly related there are significant cultural and linguistic differences between the various Aboriginal groups. (The Australian linguist Arthur Capell (1902-1986) concluded that the linguistic evidence points to a widespread affinity between the Aboriginal languages, apart from Tasmanian.)
(2) Aboriginal Astronomy
The Australian Aborigines' knowledge of the southern sky is considered by Roslynn Haynes to be the most precise for any people dependent on the naked eye. This may be an exaggerated claim. Australian Aborigines devised a seasonal calendar based on star pattern recognition in relation to the sunrise and sunset position of constellations in the sky. It appears that star patterns were more important than star brightness. A small grouping of relatively obscure stars was often identified as a pattern whilst more conspicuous single stars were ignored.
(3) Antiquity of Aboriginal Astronomy
Roslynn Haynes claims that Australian Aborigines can be identified as the world's first astronomers. This suggestion has also been made by other persons. Roslynn Haynes at least makes the speculative claim that Australian Aboriginal myths and legends can take us back some 40,000 years because Australian Aboriginals have been in Australia for some 40,000 years. This speculative claim rests upon the assumption that the Australian Aboriginal people practice(d) some form of astronomy that is more than descriptive, that is, having practical and interpretive content, and that these practices were implemented 40,000 years ago. No evidence for actual astronomical measurements of any kind has been identified. The first European description of Australian Aboriginal astronomy dates to 1857, was recorded by a pastoralist, and is a short record of a single tribe only (the Boorong). A variant of this claim is to use carbon dating to establish a date for a site occupancy and then use this figure to claim that date as the age for myths and legends. Except for a 2007 conference paper by Ray Norris ("Searching for the Astronomy of Aboriginal Australians") the problems with this sort of speculation do not seem to be readily discussed.
(4) Aboriginal Star Lore
To the Ngarrindjeri tribe of southeast Australia (the lower Murray river and western Fleurieu Peninsula/Coorong area in South Australia), and to the people on the other side of the Australian continent in coastal Arnhem Land, the constellation Crux (the Southern Cross) was a stingray. The Ngarrindjeri people identified the "Southern Pointers" (alpha and beta Centauri) as two sharks pursuing the giant stingray. Around Caledon Bay on the north-east coast of Arnhem Land (Djapu clan land) the stars of the Southern Cross constellation are also taken to represent a giant stingray. However, the "Southern Pointers" (alpha and beta Centauri) are identified as a single shark pursuing the stingray. The Pleiades are identified as a group of young women. At Yirrkalla on the north-east coast of Arnhem Land (also Djapu clan land) the constellation Orion is identified as a canoe full of fishermen. The Pleiades are their wives in another canoe.
The general term for stars amongst the Waduman was Millijen. Venus, as the evening star, was called Illurgan. The stars of the Southern Cross were called Kamerinji. (See: Native Tribes of the Northern Territory of Australia by Baldwin Spencer, 1914, Chapter X.) The Waduman tribe inhabit the country between the Daly River and the Victoria River (i.e., between the town of Darwin and the town of Katherine) in the Northern Territory.
Amongst the Aranda tribes of Central Australia colour was an important factor in the designation of stars. They distinguished red stars from white, blue, and yellow stars. Before 1900 the Aranda tribe were one of the largest Aboriginal groups in central Australia. There were an estimated 2000 Aranda at the beginning of the 20th-century. (During the early 20th-century numbers were reduced to an estimated 200 to 300 due to disease.) Due to the sparseness of the country they were nomadic most of the time and were divided into a number of small local groups or bands, each with its own territory. The southern Aranda (south of Maryvale on the Hugh River) were almost a separate tribe. The Aranda identified the stars of the Crux (Southern Cross) as the Eagle's Foot. Likewise, in the astronomy of the Luritja speaking people/tribes of the Western Desert and the areas to the west and south of the town of Alice Springs the stars of the constellation Crux formed the Eagle's Foot.
The Euahlayi tribe who inhabit northwest New South Wales identify the stars of Corvus as a kangaroo.
Writing in 1857, William Stanbridge ("On the astronomy and mythology of the Aborigines of Victoria." Transactions of the Philosophical Institute of Victoria, Volume 2, Pages 137-140) stated that amongst the Booroung (now spelled Boorong) tribe (now vanished) inhabiting the Mallee country near Lake Tyrrell the term Tourte meant star. The Boorong tribe saw the star Capella as a kangaroo named Purra. The kangaroo was pursued by 2 hunters Wajel and Yuree, the stars Pollux (beta Geminorum) and Castor (alpha Geminorum). (William Stanbridge was a pastoralist, newly arrived from England, who settled near Lake Tyrrell in Victoria. (More accurately he was a Mallee squatter.) It appears he published only one additional (but lengthy) paper on the topic. It would appear that the Boorong formed a clan within the Wergaia language area.)
Both the turning round of the stars comprising Scorpio and the turning of the Milky Way were used as clocks.
(11) Diffusion and Inter-relatedness of Star Schemes
(1) Diffusion and Influence of Mesopotamian Astral Science
Mesopotamian astronomy and astrology reached most countries in the known world in antiquity.
From the 2nd-millennium BCE onwards Mesopotamian astronomy and astrology was translated and assimilated into many cultures and civilisations outside Mesopotamia, including: Persia, India, perhaps China, Anatolia (Hittites), Greece, Egypt, Judaea, and Rome. Basically, whatever fragments of Mesopotamian astronomy became available to these cultures and civilisations were collected and utilised. During the late 2nd-millennium BCE and throughout the 1st-millennium BCE the pre-mathematical astronomical series Mul.Apin and astrological series Enuma Anu Enlil exercised great influence outside Mesopotamia. The influences could have been exerted anytime between the Early Assyrian Period and the Hellenistic Period (that is, in the Assyrian, Persian, and Hellenistic Periods).
Presently no satisfactory explanations have been put forward concerning the date, method, and means through which knowledge of Mesopotamian astronomy and astrology was transmitted to numerous countries throughout the known world. The activity of isolated individuals such as Berossus and Hipparchus is seen as an inadequate explanation to account for the amount of Mesopotamian material transmitted both westwards and eastwards, and its accuracy and popular acceptance. The extent of the Mesopotamian astronomical and astrological material transmitted is held to attest to a far more intensive transmission - likely closely connected with the media through which the knowledge was communicated. Prior to the recovered papyrus material from Roman Egypt, almost nothing is contained in texts regarding how the diffusion of Mesopotamian astronomical and astrological knowledge was accomplished. (The Aramaic language is one possibility for transmission to Judaea. After Alexander the Great conquered Mesopotamia cuneiform texts were translated into the Greek language.)
Mul.Apin type astronomy was very popular outside Mesopotamia and this was undoubtedly due in part to the fact that fixed star information constitutes a dominant factor in the Mul.Apin series. Because it was a collection of practical, highly useful astronomical information the Mul.Apin series is believed to have played a significant role in the transmission process. It was the Mul.Apin series that formed the basis for inter-relatedness between astronomical systems in a variety of regions outside Mesopotamia.
The existence of so-called "Graeco-Babyloniaca" clay tablets offers some insight into (at least) late mechanisms of diffusion. There are approximately 20 "Graeco-Babyloniaca" clay tablets which have been dated from the 1st-century BCE to the 2nd-century CE. They have Sumerian and/or Akkadian script on one side and the same text transliterated into Greek on the other. The Hellenistic Greeks established colonies in Mesopotamia - basically in Uruk. (The city of Babylon was not a focus for Greek colonists.) They were interested in temple/cult issues and a number of them married "the locals." Some specialists are willing to speculate that the use of cuneiform (including limited knowledge of Sumerian) to record mundane information might have lasted until the 3rd-century CE. Joachim Oelsner held that the so-called "Graeco-Babyloniaca" were the final form for the preservation of Sumerian and Akkadian texts when knowledge of cuneiform was extinct.
(12) Diffusion and Inter-relatedness of Constellations and Star Names
Babylonian star nomenclature passed down through the ancient Greeks and Romans, then through the Arab-Islamic and Latin (European) astronomers of the Middle Ages, to the present-day. The stars forming the constellation Lupus are an example. The Babylonians called the stars forming our modern Western constellation Lupus mul UR.BE (= mul Ur-idim) "raging beast/dog." The Greeks (i.e., Ptolemy) called the constellation (of 19 stars) "the beast." The Arab-Islamic astronomers called the constellation (also of 19 stars) "the Wolf."
The influence of Babylonian star nomenclature on the ancient Greek constellation set is clearly evident.
Classical Greek individual (proper) star names included Sirius, Procyon, Castor, and Pollux. The star Arcturus had its classical name among the Greeks at least by the time of Hesiod. Early Roman (Latin) individual (proper) star names included Arcturus, Bellatrix, Regulus, and Vindemiatrix. Ptolemy's star catalogue listed only 7 (Greek and Latin) individual (proper) star names, in addition to the traditional use of their descriptor locations within the constellations. The 4 Greek names listed in Ptolemy's star catalogue are: Arcturus, Antares, Procyon, and Canopus. The 3 Latin names listed in Ptolemy's star catalogue are: Capella, Spica, and Regulus.
Both the Greeks and Romans called the star Regulus the "kingly star." The name for this star used by the Arabs also meant "kingly star."
Well-known classical Greek and Roman star names were revived during the European Renaissance and their use is continued today. Examples from Ptolemy's star catalogue are: Arcturus (Greek) (α Bootis) and Spica (Latin) (α Virginis). Others (not listed in Ptolemy's star catalogue) include: the Greek proper star names Sirius, Castor, Pollux, and Alcyone; and the Latin proper star names Bellatrix, Mira, and Vindemiatrix. The Latin star name Polaris was introduced in modern times. A number of recent Latin star names exist that derive from translations of Arabic star names during the Middle Ages. Examples are Ancha (θ Aquarii) and Graffias (β Scorpii).
(3) Arab-Islamic World
The breakup of the Roman Empire in the West occurred during the 5th-century CE. Considerable anarchy reigned throughout most of Western Europe until circa 1000 CE. By 1000 CE Islam, the religion proclaimed by Mohammed in 622 CE, had moved out from the Arabian peninsula and had spread over a large part of the globe.
"The star names used in the classical Islamic world were derived from two distinct sources: (1) the various (non-standardised) names originated by pre-Islamic groups of Bedouins (the nomadic desert Arabs of the Arabic Peninsula) (older body), and the main body (younger group) of indigenous Arabic star/asterism names were probably formed in the period 500-700 CE (prior to the introduction of Islam in the 7th-century CE); and (2) those transmitted from the Greek world. As Greek astronomy and astrology were accepted and elaborated, primarily through the Arabic translation of Ptolemy's Almagest, the indigenous Bedouin star groupings were overlaid with the Ptolemaic constellations that we recognize today." (Islamicate Celestial Globes by Emilie Savage-Smith (1985) Page 114.) "A third set of names derived from the Arabic were bestowals, often ill-based, by early modern Western astronomers even though they had never been used by Arabian astronomers. Most of these names have disappeared. Thuban, alpha Draconis, is an exception." (Early Astronomy by William O'Neill (1986) Page 162.) Both Emilie Savage-Smith and William O'Neill are reliant on the fundamental studies of Paul Kunitzsch. An example of the first category of star names of Arabic origin is Aldebaran from Al-Dabaran. An example of the second category of star names of Arabic origin is Fomalhaut from Fam al-Hut. An example of the third category of star names derived from Arabic is Thuban, alpha Draconis.
(4) Latin Europe
The movement of Arab-Islamic star names into Europe is rather complex. At first, Latin translations were made by European scholars from Islamic-Arabic translations of original Greek astronomical manuscripts. Soon after a number of Arab-Islamic astronomical and mathematical treatises were also translated into Latin. Somewhat later, with the discovery of Greek manuscripts in the Byzantine Empire, important Greek works were translated directly into Latin from the Greek.
A large amount of Arab-Islamic star nomenclature found its way into the Latin (European) astronomy of the Middle Ages. European astronomers and celestial map makers began to use Arabic star names in preference to Latin names circa the 12th-century CE. This practice kept increasing with the increasing ease of European access to Islamic texts and instruments. By the end of the 15th-century the process of European adoption of Arabic star names was essentially complete. The "Arabic" names were retained in the formal, scientific nomenclature until the end of the 19th-century. Due to Arabic influence on Europe during the Middle Ages several hundred stars now have proper names. (These are basically Latinised Arab-Islamic star names.) When Arab-Islamic astronomy reached Europe the Arabic names of stars and constellations were translated into Latin. However, both Latin and Arabic were used as scientific languages in Europe for some considerable time. On European celestial globes each constellation often had its name in Latin, Greek, and Arabic. However, numerous Arabic words in astronomy (including star names) were simply adopted in Europe without translation. As example: the star name Aldebaran (meaning "the follower"), the star name Rigel (meaning "the foot"), and the star name Altair (meaning "the flyer").
Examples of prominent bright stars with Arabic names include: Altair, Algol, Betelgeuse, Deneb, Rigel, and Vega.
The Renaissance period saw the appearance of philological studies into the history of stellar nomenclature. The focus of these philological studies was the Arabic and Latin names of the medieval period but also included classical Greek and Roman names from a few recovered classical texts. During the Renaissance period (broadly the 200 years between 1400 and 1600), and also the post-Renaissance period (particularly the heyday of celestial mapping in the 17th- and 18th-centuries), European astronomers also searched through the philological studies for new individual star names to apply to the star charts and celestial globes they developed. One such philological work was A learned treatise of globes by the English scholar John Chilmead (Latin edition 1594; English translation 1638). Wilhelm Schickard, the astronomer and professor of Oriental languages at Tübingen, supplied the Arabic letters and star and constellation names for Coelum stellatum Christianum by Julius Schiller (1627). Julius Schiller's Christianised star atlas was a part of the Counter-Reformation attempt to de-paganise the heavens and substitute Judeo-Christian imagery.
The Italian Theatine monk, mathematician, and astronomer Giuseppe Piazzi (1746-1826) introduced nearly 100 new star names (mostly "Arabic") in his Palermo Catalogue published in 1814 (his 2nd star catalogue). These star names were derived by Giuseppe Piazzi from the philological study Tabulae longitudinum et latitudinum stellarum fixarum ex observatione principis Ulugh Beighi (1665) by the English Orientalist Thomas Hyde (1636-1703). The German historian, chronologist, and astronomer Ludewig Ideler (1766-1846) made an important and long-standing (but flawed) contribution to the philological study and historical explanation of Arabic star names. His Untersuchungen über den Ursprung und die Bedeutung der Sternnamen (1809) was used as a basic reference source for over 150 years. The basis of the book was Ideler's translation of the original 13th-century Arabic text Description of the Constellations by the Persian astronomer Al Kazwini, with Ideler's additions and annotations from classical and other sources. Due to the author's additional use of numerous unreliable and mostly secondary Arabic sources the book unavoidably contains numerous errors.
The majority of modern star names in the European languages are corrupt forms of the Arab-Islamic names (mainly due to linguistic adaptation and the inaccuracies of transliteration). About one-third of corrupted star names derived from Arabic (by early modern Western astronomers) had never been used by Arab-Islamic astronomers as star names. Most of these particular star names are no longer in use. An exception is Thuban (alpha Draconis). Most Arab-Islamic star names in use are contractions of Arabic terms for "the body part of the constellation figure." The star name Vega is a corrupt form of the Arab-Islamic [al-nasr] al-wāqi', "the swooping [vulture]," which has no counterpart in classical Greek star nomenclature. The star name Denebola is a corrupt form of the Arab-Islamic dhanab al-asad, "the tail of the Lion." This descriptor was exactly the way the ancient Greeks referred to this star.
In a few cases present-day astronomers have even used ancient Babylonian star names. As example: Girtab (θ Scorpii), Nunki (σ Sagittarii). Originally Nunki was the Babylonian name for the star Canopus.
(13) Origin of Constellations and Star Names in Latin Europe
(1) Ptolemy's Star Catalogue
The earliest Western star catalogue (as we understand the term) originated with Ptolemy. The culmination of the Greek establishment of constellation (and star) names was contained in Books VII and VIII of Ptolemy's Almagest, written circa 140 CE. In it Ptolemy listed 1025 (fixed) stars. (The Almagest contained no star maps.) The scheme was Aratean in origin.
The constellation list in Ptolemy's star catalogue standardised the Western constellation scheme. The constellation scheme described by Ptolemy consisted of 21 northern constellations, 12 zodiacal constellations, and 15 southern constellations.
The northern constellations: (1) Little Bear, (2) Great Bear, (3) Dragon [Draco], (4) Cepheus, (5) Ploughman, (6) Northern Crown, (7) Kneeler [Hercules], (8) Lyre, (9) Bird [Cygnus], (10) Cassiopeia, (11) Perseus, (12) Charioteer [Auriga], (13) Serpent Holder, (14) Serpent [Serpens], (15) Arrow, (16) Eagle, (17) Dolphin, (18) Forepart of Horse [Equuleus], (19) Horse, (20) Andromeda, (21) Triangle.
The zodiacal constellations: (1) Ram, (2) Bull, (3) Twins, (4) Crab, (5) Lion, (6) Virgin, (7) Scales [Claws], (8) Scorpion, (9) Archer, (10) Goat-horned, (11) Water-pourer, (12) Fishes.
The southern constellations: (1) Sea-Monster [Whale], (2) Orion, (3) River, (4) Hare, (5) Dog [Greater Dog], (6) Dog's Forerunner [Lesser Dog], (7) Argo, (8) Watersnake, (9) Bowl, (10) Raven, (11) Centaur, (12) Beast [Wolf], (13) Censer [Altar], (14) Southern Crown, (15) Southern Fish.
The classical Greek constellation set has proved resistant to change. Ptolemy's Almagest, and its star catalogue, became dominant and influential for many centuries both in the Islamic world and in Western Europe.
(2) Transmission of Aratean Constellation Figures
Depictions of the constellations formed virtually the only uninterrupted iconographical tradition from classical Antiquity through to the Renaissance. With the introduction of Arabic astrology in western Europe in the later Middle Ages, images of the 7 planets and other astronomical constructs/tools, such as decans and paranatellonta, were added to this inventory as well.
The Aratean-based illustrative representations of the constellations were established by the end of the Roman Empire. (Artistically, Aratean constellation imagery can be traced to the Atlas Farnese dated to the 2nd-century BCE. On the globe held by Atlas the images of the constellations appear minus their stars.) These were subsequently modified by illustrators in the Byzantine, Islamic, and Carolingian traditions. Knowledge of Greek culture and texts was lost to Western Europe by the early middle ages (the start of which is dated from the fall of Rome in 476 CE). The Classic Aratean tradition of constellations and constellation illustration was revived in Western Europe during the Carolingian period (circa 8th-century CE to circa early 11th-century CE). The Carolingian Renaissance peaked with the rulers Charlemagne and Louis the Pious in the 8th- and 9th-centuries. (There was an increase in the arts, literature, liturgical, and scriptural studies.)
In the Carolingian world, however, the Latin versions of the Phainomena of Aratus were treated primarily as literary sources. They were produced primarily for non-technical general interest. They were treated as catalogues of constellation names and constellation stories. The constellation figures that accompanied Carolingian Aratea manuscripts (1) usually did not accurately reproduce the proper positions of the individual stars in each constellation in accordance with the text; and (2) very often failed to reproduce the correct number of stars in each constellation in accordance with the text.
Under Charlemagne there was a deliberate classical revival in almost every cultural field (scholarship, literature, art, and architecture). The court of Charlemagne in Aachen systematised astronomical learning (and revived other aspects of classical knowledge). (The impetus was Charlemagne's claims to the imperial status of Roman emperors and his extension of Carolingian power into Italy. Charlemagne's quest was to establish a legacy of greatness that connected back to the Roman Empire. As much as anything his efforts to revive the title of Latin Emperor had their basis in the vast realm that he reigned over. With his coronation on Christmas day 800 CE as Holy Roman Emperor, Charlemagne laid claim to his succession to the Roman emperors of antiquity, and indeed, to the classical past.) The Carolingian revival lasted for approximately 100 years, spanning the 9th-century, and largely involved the recovery, mostly from Italy, of as many classical scientific and literature texts as could be found.
Most of the classical Latin works that have survived were preserved through the copying efforts of Carolingian scholars in monastic schools and scriptoria (centres for book copying) throughout Francia (Western Europe). Most of the earliest manuscripts available for ancient texts are Carolingian. (Possibly most of the (surviving) recovered texts came from the city of Ravenna as it had remained a political and cultural power into the 6th-century CE. Charlemagne conquered North Italy and established himself as master of Rome.) Carolingian illuminators referenced classical styles and mythological meaning and carefully reproduced the classical constellation figures. (Charlemagne's scribes were responsible for copying more than 7,000 manuscripts that would otherwise have been lost.) However, the Carolingian illustrations of the constellations lack accuracy in their relationship to each other and consistency in terms of their projection.
(3) The Introduction of Arab-Islamic Star Names into Latin Europe
Ptolemy's Almagest was first translated into Arabic circa 827. The first competent (clear), thorough, non-mathematical (descriptive) summary of Ptolemy's Almagest in Arabic was carried out by the Egyptian astronomer and geographer Abu al-Abbas al-Farghani. Its title was Elements of Astronomy and it was written in the period between 833 and 857. Al-Farghani was born in Farghana (present-day Fergana), Uzbekistan, and died in Egypt. He was a member of the House of Wisdom established by the Abbasid Caliph al-Ma'mūn in the 9th-century. The House of Wisdom in Baghdad became the centre for both the work of translating and of research.
The retransmitted Latin translation of Ptolemy's Almagest by Gherardo of Cremona in the 12th-century began the distorted use of Greek-Arabic-Latin words that appear in modern lists of star names. In Greek astronomy the stars within the constellation figures were usually not given individual names. (An exception was made for a few of the brighter stars.) Ptolemy did not identify the stars in his catalogue with Greek letters, as is done by modern astronomers. Each of the 1025 stars listed by Ptolemy (Book VII and Book VIII of the Almagest) was identified (1) descriptively by its position within one of the 48 constellation figures; then (2) by its ecliptic latitude and longitude; and then (3) its magnitude. When the Arabic astronomers translated Ptolemy's Almagest, and adopted the Greek constellations, they also applied their own star names to the listed stars. Beginning with Gherardo, when the Arabic texts of the Almagest were translated into Latin, the Arabic star names were retained but were frequently translated in a corrupted form. The medieval European astronomers adopted the system of using individual (Arabic) star names in their uranography. Hence the star names we use today were essentially introduced by the medieval European translators of Arabic texts of Ptolemy's Almagest, the translation from Arabic to Spanish of al-Sufi's Book of the constellations of the Fixed Stars (Kitab suwar al-kawākib), and also by the introduction of hundreds of Arabic astrolabes into Europe during this period.
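To make the three-part structure of a catalogue entry easier to picture, here is a minimal sketch (not drawn from any historical edition of the Almagest; the record layout and the sample values are illustrative assumptions, not Ptolemy's figures) of how a star entry of the kind described above could be represented:

```python
# A minimal sketch (illustrative only) of the three-part record described above for a
# star in Ptolemy's catalogue: a verbal locator within one of the 48 constellation
# figures, ecliptic longitude and latitude, and a magnitude. No individual proper
# name is attached, matching the practice described in the text.
from dataclasses import dataclass

@dataclass
class CatalogueEntry:
    constellation: str         # one of the 48 Ptolemaic constellation figures
    description: str           # verbal locator within the figure
    ecliptic_longitude: float  # degrees along the ecliptic
    ecliptic_latitude: float   # degrees north (+) or south (-) of the ecliptic
    magnitude: int             # 1 (brightest) to 6 (faintest)

example = CatalogueEntry(
    constellation="Little Bear",
    description="the star on the tip of the tail",
    ecliptic_longitude=60.0,   # illustrative value only
    ecliptic_latitude=66.0,    # illustrative value only
    magnitude=3,
)
print(example)
```

The point of the sketch is simply that each star is located verbally within a constellation figure and then numerically by ecliptic coordinates and magnitude, which is why the later Arabic and Latin traditions had to supply the individual names discussed below.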
The principal channel for the recovery of the Almagest in Western Europe was the Arabic to Latin translation by Gherardo of Cremona. It was made at Toledo using several Arabic versions and completed in 1175. It was widely circulated in manuscript copies before appearing as a printed book in 1515. (The European printing press was invented by Johannes Gutenberg in 1440.) Gherardo's translation was the only version of Ptolemy's Almagest known in Western Europe until the later discovery of copies of the original Greek texts and their translation into Latin in the 15th-century. However, it was very literal and hard to follow. (Some translations from the Greek text were, however, made in medieval times. Ptolemy's Almagest in the original Greek continued to be copied and studied in the eastern (Byzantine) empire. Some years before Gherardo's translation, circa 1160, a very literal translation of Ptolemy's Almagest was made directly from the Greek text into Latin by an unknown translator in Sicily. However, this particular version had little circulation and little influence. The copy in the Vatican library came through the great Florentine book collector Coluccio Salutati.) In the 15th-century European scholars, first George of Trebizond and then Johannes Regiomontanus, independently translated Ptolemy's Almagest from copies of the original Greek text.
What resulted in Europe was a polyglot system of Greek constellations with Latin names containing stars with (largely) Arabic titles.
(4) The Replacement of Aratea by Michael Scotus
The works of Michael Scotus on the constellations, and his manner of illustrating them, caused a lengthy eclipse of Aratea during the latter Middle Ages.
Prior to the mid 15th-century star maps tended to be used to illustrate text in books. Free-standing celestial images were quite rare (and accuracy was usually sacrificed for art). During the Middle Ages pictures appeared illustrating the individual constellations. In these illustrations the classical constellations were separated from the celestial globe and also the individual constellation stars were often omitted. (The astrologer Michael Scot (Scotus), a contemporary of Peter of Abano (circa 1250-1310), included constellation figures in the margins of his 2-volume book on astronomy/astrology.) In the high Middle Ages, unlike the previous periods, the ancient constellation figures were transformed by illuminators to an almost unrecognisable degree. Traditional (classical) constellation representation (per the pseudo-classical Carolingian forms) was influenced by Romanesque and Germanic (Gothic) forms (and also Graeco-Arabic forms). (The end result was the classical subject matter was divorced from its classical form.) The height of this transformation of classical constellation representation occurred during the 13th-century.
In England the school of astrology under the leadership of the mathematician, philosopher, and scholar Michael Scotus (Scot) (born circa 1175 - died circa 1234) replaced the Aratean tradition almost completely. His book Liber de signis (containing a section on the constellations) set out a new set of constellations that differed from the set of 48 Ptolemaic constellations. Others imitated his new scheme of constellations. For example he was followed by Bartholomew of Parma in his Breviloquium de fructu tocius astronomie. (Bartholomew of Parma flourished circa late 13th-century and early 14th-century. Parma is a city in the Italian region of Emilia-Romagna.) This new constellation set appears to have originated from 12th-century CE elaborations of literal translations of Islamic-Arabic texts (on astrology).
For his new constellations Michael Scotus borrowed from Arab-Islamic images of the constellations that had their origins in the Sphaera Barbarica. (Especially the decans and paranatellons.) The art historian Fritz Saxl showed that the representations of the planetary gods in the works of Michael Scotus can be traced back through Arab-Islamic sources to ancient Babylonian sources. Basically, the Arab-Islamic figures of the planets reflect the Babylonian gods: Nebo (= Mercury), Ishtar (= Venus), Ninib (= Mars), Marduk (= Jupiter), and Nergal (= Saturn). An uninterrupted textual transmission was made possible by the survival, in certain isolated districts of Mesopotamia, of groups that invoked the Babylonian planetary gods and venerated their images, such as the Harranite Sabeans. Planetary illustrations in Arab-Islamic manuscripts match the planetary effigies which adorned Harranite sanctuaries. (The Harran region encompassed southeastern Anatolia and northern Syria.) The Arab-Islamic book the Picatrix, which was an essential intermediary in the transmission of Babylonian planetary figures, was a translation of the 11th-century CE book on magic, the Ghâya. The Picatrix, likely written circa 1200 CE, was translated into Latin and was well known in Western Europe. The book had a major influence on magical thinking in Western Europe, especially from circa 1400 to circa 1600.
The illustrations introduced by Michael Scotus were an attempt at adaptation and fusion; an effort to make European forms out of the astral gods/goddesses of ancient Babylon. In this they showed the influence of Romanesque and Germanic (Gothic) forms.
Michael Scotus, who (as the surname signifies) was born in Scotland, lived mostly in France, Spain, and Sicily. In 1230 he visited Oxford, England where he had spent time studying as a young student. In his illustrations of the constellations he combined Graeco-Arabic and mythological imagery with Latin Aratean tradition. His illustrations of constellations supplanted the classical types of the Carolingian tradition. His work on the illustration of the constellation figures was very influential until the Renaissance period. Also, Michael Scotus undoubtedly had access to earlier, popular star lore. (During the Middle Ages in Western Europe classical mythological subjects were not usually represented within the limits of the classical style. The artistic forms under which classical concepts were continued during the Middle Ages were utterly different from the classical style.) During the Gothic period in Europe (circa 1100-1450 CE) there was little interest in illuminated astrological manuscripts.
It is probable that Michael Scotus (a polymath) was the finest intellect at the court of Emperor Frederick II (1194-1250) in Palermo, Sicily. He had gone there circa 1200 in the role of "court astrologer" after being enticed by King Frederick II to join his court in Sicily. (There is little evidence for Frederick II having an interest in astrology. The title of Imperial Astrologer was given to Michael Scotus in the colophon to his Astronomia.) He then left (circa 1209) to work at the great Arab translation centre in Toledo (Spain) and then returned again to Sicily circa 1220. On his return he gave his attention to science and medicine. He remained there until his death. Though Frederick II was the ruler of both Germany and Sicily he preferred to live in Sicily. In 1220 he acquired the title of Emperor of the Holy Roman Empire. It was in Sicily at this period that tolerance enabled the coexistence of European and Arab scholars.
The astrological text written by Michael Scotus (and containing his illustrations of the constellations) was widely copied throughout the late medieval period. However, by 1500 this text seems to have become somewhat forgotten. His texts include: Liber introductorius, Liber particularis, Liber phisionomie [the short title of this medical treatise is: Physionomia], and Liber de signis. In his two later books which followed Astronomia, the Liber introductorius and the Liber particularis, he set out a popular exposition of both astrology and astronomy. The extraordinary increase in the prestige of astrology in Western Europe in the late Middle Ages was due to the introduction of Arab-Islamic philosophy and science into Sicily and Spain.
(5) The Reintroduction of Aratea by Albrecht Dürer
The Renaissance period (at its height circa 1450-1550 CE) saw the reinstatement of the Aratean tradition of constellation illustration. (This could be described as the intention to reinstate 'mythological correctness.' This saw illustrators and artists turn to the pre-gothic period - to models closer to Graeco-Roman classical antiquity.) The Renaissance period saw a search for order amongst multiple transmissions of astrological works. Many illustrators began altering the non-classical constellation figures, such as those found in the astronomical/astrological manuscripts of Michael Scotus, with representations that looked more classical. (During the 15th-century CE German artists once again began to copy Carolingian manuscripts. An example of a relatively pure source of classical forms was the illustrations in the Carolingian copy of the Roman 'Calendar of 354.') During the Renaissance astronomical manuscripts obtained from Sicily provided an absolute standard for the illustration of the constellations. The process of importing constellation figures (with classical features) from Italy to Germany had begun circa 1450 CE. These constellation figures were beginning to be incorporated into star charts produced in Germany prior to Dürer's constellation figures being developed.
Ultimately, the depictions of the constellations of post-Renaissance Europe derive from the constellation figures of the artist and engraver (printmaker) Albrecht Dürer (1471-1528). Albrecht Dürer was a native of Nürnberg (Nuremberg), Germany. (His father was Hungarian.) In 1515, in cooperation with Johannes Stabius and Conrad Heinfogel, he produced the first (scientifically rigorous) printed star charts (and they are considered the first modern star charts). The northern star chart (planisphere) was titled Imagines Coeli Septentrionales and the southern star chart (planisphere) was titled Imagines Coeli Meridionales. These mapped the constellations and the key stars of both the northern and southern heavens quite accurately. The constellation figures were portrayed in a classical style and this was followed by later European star chart makers. Dürer was an artist - not an astronomer. The constellations are depicted from the point of view of an external observer looking in towards the earth. It appears a key influence on Dürer was the depictions of constellation figures on Arab-Islamic celestial globes. (Because the Arab-Islamic constellation figures were neither Classical nor contemporary European the Latin illustrators basically ignored them and simply followed the text-descriptions of the constellations to make contemporary images.) On both of Dürer's sky maps (planispheres) the classical constellation figures appear (except Lyra) with their classical attributes correctly drawn.
The production of the star maps was the result of close cooperation between Johannes Stabius (mathematician and cartographer of Vienna in Austria), Conrad Heinfogel (astronomer), and Albrecht Dürer (artist). The star chart projection was designed by Johannes Stabius, who also determined the stellar coordinates. The stars were placed by Conrad Heinfogel, who calculated their positions on the maps; the accuracy of the star positions was due to these two men. The constellations were drawn by Albrecht Dürer, who was the key influence on the constellation figures used. The star maps were commissioned by Johannes Stabius, who required that they be made on the basis of a manuscript from 1503 written by Conrad Heinfogel (and others).
These star maps were reprinted numerous times (and Dürer's style was copied by numerous 16th-century star map makers) and the star charts were disseminated throughout 16th-century Europe. They were innovative for the 16th-century in combining accuracy of star placement with classical constellation figures. Both star maps were produced under the patronage of Emperor Maximilian I. (The coat of arms for Emperor Maximilian I appears in the top left-hand corner.)
(6) The Great European Star Atlases
After Albrecht Dürer published the first rigorous celestial charts in 1515, numerous other persons in Europe published accurate detailed star atlases.
There were 4 great European star atlases: Uranometria (1603) by the German lawyer and uranographer Johann Bayer (the first popular star atlas); Uranographia (1690) by the German brewing merchant and astronomer Johannes Hevelius; Atlas Coelestis (1729) by the English astronomer John Flamsteed; and Uranographia (1801) by the German astronomer Johann Bode.
From circa 1850 a key purpose of star catalogues has been to improve astrometric (positional) accuracy beyond being a simple inventory of stars to a given magnitude level.
(14) European Constellating of the Southern Sky
The charting of the Southern Hemisphere created the need for new constellations. The 48 classical constellations of the Greeks did not map the entire celestial sphere. Until the end of the 16th-century CE European star charts contained only the 48 constellations canonised by Ptolemy in the 2nd-century CE. The stars of the southern sky which did not rise above the horizon of the ancient Greeks remained un-constellated on European celestial maps until the European voyages to the southern hemisphere in the 16th-century. The 16th-century has been termed the Age of Exploration. During the 16th- and 17th-centuries the Dutch, French, and English (and Spanish, Portuguese, and Italian navigators) made numerous voyages of discovery to the southern hemisphere. As a result, the origin of the constellations surrounding the south celestial pole is involved in some obscurity.
The process of constellating the southern celestial sky was begun by Petrus Plancius. He included 2 new southern constellations (Crux (as a separate constellation, the stars of which are given in Ptolemy's Almagest) and Triangulus Antarcticus (Eridanus continued from Ptolemy's 34th star to α Eridani)) on his sky globe published in 1589 and then 2 more (Columba (the stars of which are given in Ptolemy's Almagest) and Polophylax (the figure of a man consisting of 7 stars)) on his sky globe published in 1592. These constellations appeared on his 1594 map of the world (the earliest existing map of the southern heavens) entitled "Orbis terrarum typus de integro multis in locis emendatus Pedro Plancio, 1594." Of the 10 constellations invented by Petrus Plancius 4 are still recognised today.
An influential voyage for the invention and naming of southern constellation on European sky maps was the first Dutch trading expedition of 4 ships which left Holland for the East Indies in 1595. The chief pilot (navigator) on the Hollandia (later on the Mauritius) was Pieter Dirckszoon Keyser (circa 1540-1596). The Dutch navigator Pieter Keyser was adept in both mathematics and astronomy and his cooperation to chart the southern sky was sought by Petrus Plancius. Keyser was trained by Petrus Plancius to chart (using an astrolabe or cross-staff given to him by Plancius) the southern stars in the constellation-free zone around the south celestial pole. Probably he mapped the stars of the southern sky from Madagascar and also perhaps near the island of Sumatra. He was apparently assisted in his observations by the Dutch navigator Frederick de Houtman (1571-1627). Keyser died during the voyage (in September 1596) while the trading fleet was at Banten (western Java). When the trading fleet returned to Holland in 1597 his catalogue of 135 stars, divided into 12 newly invented constellations, was given to Plancius. Plancius then added these constellations to his sky globe published in 1598. (Another version (incorrect) is that Plancius used Keyser's data to form 12 new southern constellations and these were added to his 1598 globe.) Petrus Plancius is the likely source for the southern constellations depicted in Johann Bayer's Uranometria.
The 12 southern constellations created were: Apus, Chamaeleon, Dorado, Grus, Hydrus, Indus, Musca, Pavo, Phoenix, Triangulum Australe, Tucana, and Volans. Some are named after exotic birds such as the toucan, peacock, and phoenix.
Initially the new southern hemisphere constellations appeared on a few celestial globes (1598 globe by Petrus Plancius, 1600 globe (some versions state 1599 or 1601) by Jodocus Hondius, and 1603 globe by Willem Blaeu.) Petrus Plancius (1552-1622) was a Dutch theologian and cartographer; Jodocus Hondius (1563-1612) was a Dutch cartographer; and Willem Blaeu (1571-1638) was also a Dutch cartographer. Jodocus Hondius included Petrus Plancius' new southern constellations on the celestial globe he published in 1600.
The first celestial atlas to include the 12 new southern constellations was the Uranometria by Johann Bayer (a German astronomer) published in 1603. Their appearance in plate 49 of Johann Bayer's celestial atlas canonised their acceptance and use. The Uranometria is considered the first great celestial atlas. It contained a separate plate for each of the 48 traditional constellation figures. It was also based on Tycho Brahe's newly determined star positions and magnitudes. (In his atlas Johann Bayer also devised a cohesive system for designating (labeling) the stars. The system of designating individual stars proposed by Johann Bayer in 1603, and adopted into Western astronomy, comprises the brightest star in a constellation being called alpha, the second-brightest beta, and so on. However, because many stars already had proper names their use continued and still remains popular. As example: alpha Geminorum and beta Geminorum are called Castor and Pollux.)
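As an illustration of the lettering rule just described, the following short Python sketch (not from any historical source; the star list and magnitudes are modern illustrative values supplied here) assigns Greek letters strictly by decreasing brightness:

```python
# A minimal sketch of the simplified Bayer scheme described above: within a
# constellation, stars are lettered alpha, beta, gamma, ... in order of decreasing
# brightness (i.e. increasing apparent magnitude). Star names and magnitudes below
# are illustrative modern values, not taken from Bayer's Uranometria.

GREEK = ["alpha", "beta", "gamma", "delta", "epsilon", "zeta", "eta", "theta"]

def bayer_letters(stars):
    """stars: list of (proper_name, apparent_magnitude); smaller magnitude = brighter.
    Returns (designation, proper_name) pairs in lettering order."""
    ranked = sorted(stars, key=lambda s: s[1])  # brightest first
    return [(GREEK[i], name) for i, (name, _) in enumerate(ranked)]

if __name__ == "__main__":
    gemini = [("Castor", 1.58), ("Pollux", 1.14), ("Alhena", 1.92)]
    for letter, name in bayer_letters(gemini):
        print(f"{letter} Geminorum = {name}")
```

Applied strictly, this rule would letter Pollux (the brighter star) as alpha Geminorum; Bayer in fact assigned alpha to Castor and beta to Pollux, a reminder that his published ordering only roughly follows brightness.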
In 1603 Frederick de Houtman published a Catalogue of Southern Stars at the end of his Malay and Madagascan vocabulary, entitled Spraeckende woordboeck Inde Maleysche ende Madagaskarche Talen met vele Arabische ende Turksche woorden. Houtman's catalogue consists of the right ascensions, declinations, and magnitudes of 303 stars. However, 107 stars were already given in Ptolemy's Almagest. The other 196 stars were new discoveries. The astronomer Edward Knobel ("On Frederick de Houtman's Catalogue of Southern Stars, and the Origin of the Southern constellations." (Monthly Notices of the Royal Astronomical Society, 1917, Volume 77, Pages 414-432.)) concluded that Frederick de Houtman had published as his own work the southern sky observations of the recently deceased navigator Pieter Dircksz Keyzer. This conclusion was researched and supported by the astronomer Helen Hogg (Out of Old Books - "Pieter Dircksz Keijser, Delineator of the Southern Constellations." (Journal of the Royal Astronomical Society of Canada, 1951, Volume 45, Pages 215-220)).
Plate 49 of Johann Bayer's Uranometria shows the constellations Phoenix, Hydrus, Tucana, Grus, Indus, Pavo, Apus, Triangulum Australe, Musca, Chamaeleon, Volans, and Dorado. Bayer stated that these particular constellations were observed partly by Amerigo Vespucci, partly by Andrea Corsali and Pedro de Medina, but their places were determined by Petrus Theodorus. (In reality Amerigo Vespucci (Sensuyt le nouveau monde et navigations faictes par Emeric de Vespuce (1510)) contributed no constellations. Andrea Corsali (in two letters dated 1517) described the Greater and Lesser Magellanic Clouds, the 5 stars forming the Southern Cross, and 13 other stars which cannot be identified. Pedro de Medina (Arte de navegar (1545)) only makes mention of the stars in Crux (i.e., determining latitude in the southern hemisphere by observations of α Crucis).) On his celestial globe, published in 1603, Willem Blaeu attributed all of these constellations to Frederick de Houtman. The eminent astronomer and historian Ludwig Ideler gave equal merit to Petrus Theodorus and Frederick de Houtman.
In 1612 Petrus Plancius published a new sky globe and introduced his 2 newly invented southern constellations Camelopardalis and Monoceros.
A later celestial atlas that introduced new constellations was the Firmamentum Sobiescianum by Johannes Hevelius (a German-Polish astronomer, 1611-1687), published posthumously in 1690. It was engraved by Hevelius himself to accompany his catalogue of over 1500 star positions (the catalogue was also published posthumously in 1690). Seven of the new constellations (visible from mid-northern latitudes) invented by Hevelius are still recognized today. One of the new constellations was Sextans (the sextant), named for one of his own astronomical instruments (a measuring instrument used for stellar positions). He made very accurate stellar coordinate observations without the use of telescopes.
The French astronomer and surveyor Nicolas Louis de LaCaille (1713-1762) invented 14 southern sky constellations which became standard and are still recognized today. The majority of these new constellations were named after new scientific inventions. Following his visit to the Cape of Good Hope (South Africa) in 1750 he introduced them in the Memoires of the Académie Royale des Sciences in 1752 (published in 1756). In his southern star catalogue Coelum Australe Stelliferum (published posthumously in 1763) he also introduced the division of Argo Navis into 4 parts, the 4 smaller constellations named Vela (the sail), Pyxis (the compass (but literally "the little box", as there is no Latin word for compass since the Greeks and Romans did not have compasses for navigation)), Puppis (the stern), and Carina (the keel). This constellation change has persisted. (The French cartographer Didier Robert de Vaugondy (1723-1786) became the first to actually illustrate (in 1764) Nicolas LaCaille's 4 divisions of Argo Navis. These 4 constellations became the last new constellations to be officially recognised.)
The star atlases produced by the 19th-century cartographers Friedrich Argelander (Uranometria Nova, published 1843), and Benjamin Gould (Uranometria Argentina, published 1877-1879), standardised the list of constellations to those we use today. They both followed Nicolas LaCaille and divided Argo Navis (the ship) (Ptolemy's largest constellation) into 4 parts: Vela (the sail), Pyxis (the compass), Puppis (the stern), and Carina (the keel).
The process of constellation invention was continued by numerous other astronomers of the 17th-, 18th-, and 19th-centuries but these constellations were never officially recognised or adopted and quickly disappeared.
The establishment of constellation sets covering the entire visible sky is not common to early cultures/civilisations. The appearance of elaborate constellation sets as reference systems covering most of the visible sky only originated with the development of complex societies. Complex constellation systems make their earliest appearances in the 2nd millennium BCE in the stable kingships of Mesopotamia, Egypt, and China. In these empires astronomy had become a state supported and state directed enterprise.
Present-day Western constellations, and star and constellation names, originated from a number of Near Eastern and Mediterranean cultures. The ancient Greeks are the main source of present-day Western star/constellation names. They named the most prominent stars and established the most obvious constellations by circa 800 BCE. The Greeks never thought of constellating the entire visible sky until circa the 5th-century BCE. By circa 400 BCE (likely under the influence of Babylonian uranography) the Greeks had, by borrowing and invention, established the majority of the 48 classical constellations. The Romans derived a considerable portion of their star lore and uranography from the Greeks.
The cuneiform evidence recovered since the mid 1800s indicates that Greek uranography borrowed from the earlier Babylonian uranography, established circa late 2nd-millennium BCE. Some late Egyptian influence is also indicated. Also, according to Paul Kunitzsch, the influence of earlier Babylonian nomenclature is sometimes discernible in the body (older group) of (non-standardised) star/asterism names of nomadic desert Arabs of the (pre-Islamic) Arabic Peninsula.
The constellation scheme established in Ptolemy's Almagest remained virtually unchanged until the European era of celestial mapping in the 17th- and 18th-centuries. (The cartographer Kaspar Vopel may have been the first person to add to the list of constellations handed down by Ptolemy. In 1536 he charted the constellations Coma Berenices and Antinous on a celestial globe (the globe still exists).) Islamic star mapping mostly followed the Ptolemaic tradition. Ptolemy's star catalogue remained the standard star catalogue in both the Western and Islamic world for circa 1000 years. The dome of a bath house at Qusayr 'Amra, the only remaining building of an Arab palace in Jordan built circa CE 715, contains a unique hemispherical celestial map. The surviving fragments of the fresco show parts of 37 constellations and 400 stars. This celestial map furnishes a connecting link between the classical representations of the constellations and the later Islamic forms.
An additional source of star/constellation names originated with the groups of nomadic desert Arabs of the (pre-Islamic) Arabic Peninsula. In pre-Islamic times the early Bedouin Arabic people (i.e., the nomadic desert dwelling tribes of the Arabic Peninsula) gave individual names to the numerous stars. According to Paul Kunitzsch the influence of earlier Babylonian nomenclature is sometimes discernible in this body (older group) of (non-standardised) star/asterism names. Paul Kunitzsch also holds that the main body (younger group) of indigenous (pre-Islamic) Arabic star/asterism names was probably formed in the period 500-700 CE. The folk tradition of Arabic star names was preserved by later Arab-Islamic astronomers. This has ultimately influenced the naming of individual stars in Western constellations.
Whilst our inherited constellation names are basically Greek, our European inherited star names are largely due to the influence of medieval (Arabic) Islamic astronomy on medieval European astronomy. The influence of Arabic names on Western star names dates from around the 10th-century AD when Arab astronomy flourished. (The Arabs (correctly Arab-Islamic astronomers) increased the number of individual star names. Most individual star names were introduced by al-Sufi when he published his own version of Ptolemy's Almagest in the 10th-century CE.) After the demise of the Roman Empire most Greek scientific works were translated into Arabic (including Ptolemy's Almagest). Eventually these texts were re-introduced back into Europe (and into Latin and Greek) through Arab Spain. With the Arabs the influence of the Greek language was not very strong in the names of stars and constellations. Modern star names are mostly derived from Arabic translations (or use) of Ptolemy's Almagest, chiefly Shiraz astronomer al-Sufi's 10th-century book Kitab suwar al-kawakib (Book of Constellation Figures), and also the introduction of hundreds of Arabic astrolabes into Europe. Al-Sufi's book Kitab suwar al-kawakib is our best authority for post-Islamic Arabic star-names and constellations. It also included the folk tradition of Arabic star names.
The Renaissance period was the catalyst for their being mixed together and passed down to the present day in Latin characters. The retransmitted Latin translation of Ptolemy's Almagest by Gherardo of Cremona (Lombardy) in the 12th-century was an Arabic-Latin version. This began the distorted use of Greek-Arabic-Latin words that appear in modern lists of star names. It was the only version known in Western Europe until the later discovery of copies of the original Greek texts and their translation into Latin texts in the 15th-century. Commonly used present-day individual star names include: Aldebaran, Algol, Altair, Antares, Arcturus, Betelgeuse, Canopus, Capella, Deneb, Fomalhaut, Mira, Pollux, Procyon, Regulus, Rigel, Sirius, Spica, and Vega.
Richard Allen in his highly influential book Star-Names and Their Meanings (1899) stated that European star names came chiefly from the Arabs. Allen, who had no real understanding of Arabic, also concluded that many Arabic star-names were actually translations of Greek descriptive terms transmitted through Arabic into Latin (and from Latin into English and other languages). When the linguist Mario Pei made a check of 183 English star-names he concluded that 125 were from Arabic, and 9 were from Arabic-Latin. (See: Story of the English Language by Mario Pei (1967; Page 225).) Paul Kunitzsch and Tim Smart (A Dictionary of Modern Star Names (2006; Page 11)) write: "A statistical analysis of the 254 star names here presented reveals that (counting five double entries only once) 175 names (= 70%) are Arabic and 47 (= 19%) are Greek or Latin." The modern authority on such matters is Paul Kunitzsch.
Another source of names derived from the Arabic was bestowals, often ill-based, by early modern Western astronomers of names that had never been used by Arabian astronomers. (Some European astronomers inventing their own constellations also invented their own Arabic star names.) The earliest likely example is the Dutch orientalist and mathematician Jacob Golius (1596-1667). Most of these names have disappeared. Thuban, alpha Draconis, is an exception.
The constellation scheme established in Ptolemy's Almagest remained virtually unchanged until the European era of celestial mapping in the 17th- and 18th-centuries. During this period astronomers added their own constellation inventions to the remaining gaps left in the sky. There was no agreed standardised set of constellations. (One celestial atlas had 99 constellations.)
(1) Definitive Establishment of Constellations and Constellation Boundaries
Historically, the irregularity of the constellation figures explains the irregularity of their boundaries. Constellation schemes and boundaries remained unregulated until the early 20th-century. (Generally, celestial atlases in the early 20th-century varied between 80 and 90 constellations. Constellation boundaries also varied from atlas to atlas.) Until the 1920s astronomers used irregular curved boundaries (wavy-line boundaries) to demarcate the constellation areas. The issue of (1) the number of constellations, and (2) their boundaries, was taken up by the International Astronomical Union (IAU) in 1922. (The IAU was founded as a professional body for astronomers. Its purpose is to promote and safeguard standards in astronomy through international cooperation.) In a series of resolutions beginning in 1922 and ending in 1930, the IAU effected the division of the celestial sphere into 88 precisely defined constellations, complete with official spelling of names and use of abbreviations. In 1922, at its first General Assembly, the newly formed IAU (established 1919) officially adopted and regularised the 88 constellations (or at least took up the issue), and in 1928 it approved the definition of their boundaries. The Belgian astronomer Eugène Delporte (1882-1955) had been commissioned by the IAU to create boundaries for all the constellations, with instructions to follow, as far as possible, the divisions which appeared in the principal celestial atlases then in use. His work on the demarcation of the constellations was published in his book Délimitation scientifique des constellations, cartes (1930, 2 volumes), with texts, maps, and celestial atlas. For the first time the delimitation of constellations was fixed for the whole of the sky. The boundaries between the constellations were fixed along lines of right ascension and declination for the epoch 1875. (The boundaries between constellations were defined by arcs of hour circles and parallels of declination for a specific reference date, the equinox of 1875. This permits a simple adjustment for precession, giving the right ascension and declination of any boundary point on any date.) Basically the constellation boundaries became rectangular borders. This made the use of traditional constellation figures obsolete. (An effect of Delporte's scheme on Flamsteed catalogue star names was that some stars are now located in different constellations. As examples: 49 Serpentis is in Hercules, and 30 Monocerotis is in Hydra.) A transition to non-pictorial star maps had taken place with the 1928 IAU decision on constellation boundaries.
Constellations are now defined by their boundary lines (rectangular borders), not by their historic figures. Instead of being star patterns they are now precisely defined areas of the sky. This ensures that constellations now completely cover the sky and all stars lie within the boundary of a constellation. Interestingly, precession is causing the constellation boundaries to tilt. (The boundaries of constellations marked on star atlases/charts are entirely artificial. Due to the absence of any other authoritative work on constellation boundaries, Delporte's book established the definitive system, to which further changes have not been made.) Enclosed within modern constellation boundaries are both the stars in the traditional constellation figures and the neighbouring stars outside the figures. The 88 official constellations selected by the International Astronomical Union were all of European origin simply because the wide use of these constellations was already well established.
The use of proper names for stars has decreased since the 19th-century when astronomers adopted a more systematic way of identifying stars (Bayer designation, right ascension and declination).
Appendix 1: Forms of (Star and) Constellation Names
The Latin names we use present-day for the constellations are inherited from Renaissance use. Each Latin constellation name has two forms: the nominative, for use when you're talking about the constellation itself, and the genitive, or possessive, which is used in star names. For example, Hamal, the brightest star in the constellation Aries (nominative form), is also called Alpha Arietis (genitive form), meaning literally "the Alpha of Aries." The IAU also adopted three-letter abbreviations of the constellation names at its inaugural General Assembly in Rome in 1922. So, for instance, Andromeda is abbreviated to And whilst Draco is abbreviated to Dra. This system of abbreviation is convenient when space is at a premium. Alpha Arietis is written α Ari, using the lower-case Greek letter alpha and the abbreviation for Aries.
"Most constellation names are simple common nouns with obvious English equivalents. For instance, Leo is Latin for "the lion" or "a lion." The Greeks sometimes tried to associate the constellation Leo with some particular lion from their mythology, but there's every reason to believe that when they inherited this constellation from Mesopotamia, it was just a generic lion. Or, more precisely, the great celestial Lion — the Lion that Lives in the Sky. Other constellations are named after specific people or things. For instance, Eridanus is one particular mythological river, not the Latin equivalent of "a river" or "the river." The constellation Perseus is often nicknamed the Hero in English, but this is a little misleading, as that nickname could apply equally well to Hercules. Not surprisingly, there are plenty of intermediate cases. Thus, Cetus means just a sea monster, whale, or large fish, but it's very likely that the constellation's inventor was thinking of the particular monster that tried to eat Andromeda. And Gemini is the common Latin word for "twins" but also the special epithet of the mythological twins Castor and Pollux." ("Constellation Names and Abbreviations." by Tony Flanders (On-line Sky and Telescope, http://www.skyandtelescope.com/howto/Constellation_Names.html.)
Appendix 2: Modern Use of Star Names
In professional publications it is still usual to use the proper names of all 1st-magnitude stars – and other bright stars – historically visible from mid-northern latitudes and a few special cases. These star names are: Achernar, Aldebaran, Altair, Antares, Arcturus, Betelgeuse, Canopus, Capella, Castor, Deneb, Fomalhaut, Polaris, Pollux, Procyon, Regulus, Rigel, Sirius, Spica, Vega.
In amateur publications it is not unusual for the proper names of other bright stars (that are either close to 1st magnitude, occupy important locations in stick-figure depictions of constellations, or have special properties) to be used. These star names include: Albireo, Alcor, Alcyone, Algol, Almach, Alphard, Alpheratz, Bellatrix, Denebola, Elnath, Enif, Izar, Kochab, Merope, Mira, Mirach, Mirfak, Mizar, Vindemiatrix. As examples: Albireo is a famous double star and Algol is a famous variable star.
Appendix 3: Pronunciation of (Star and) Constellation Names
In the early 1940s a 3 person expert committee was established by the American Astronomical Society (AAS) (the major professional organization in North America for astronomers, other scientists and individuals interested in astronomy) to establish a uniform/standardised pronunciation of star and constellation names. The report of the committee (Committee of the American Astronomical Society on Preferred spellings and Pronunciations), titled Pronouncing Astronomical Names, on preferred spellings and pronunciations, was approved for publication at the New Haven meeting of the AAS in June, 1942. The AAS officially adopted the new list of pronunciations. The pronunciations in the report are all given in American English. The process was described in the article "Pronouncing Star Names." Science News Letter, Volumes 41-42, 1942, August 22, Page 125. The journal Sky & Telescope published the constellation pronunciations several times, first in the June 1943 issue of the journal and most recently (with some minor modifications) in the article "Designated Authority" by Ed. Krupp, May, 1997, Page 66.
Tony Flanders argues that the AAS report is deeply flawed. "It was inspired by the IAU's standardization of constellation definitions, but that was a very different situation. The IAU reforms were successful because they addressed an urgent need. Newly discovered variable stars are named after the constellation that contains them, and this only works if everyone agrees on the constellation boundaries. There's no comparable reason to standardize pronunciation. Experienced astronomers, both professional and amateur, pronounce constellation names in many different ways but have no trouble understanding each other. Moreover, the pronunciations chosen for the AAS report were somewhat arbitrary. There are several well-defined systems for pronouncing Latin, and the AAS pronunciations don't conform with any of them." ("Constellation Names and Abbreviations." by Tony Flanders (On-line Sky and Telescope, http://www.skyandtelescope.com/howto/Constellation_Names.html).
Appendix 4: Naming Stars Today
The art of giving stars proper names is now virtually redundant. In most cases stars are simply given a numerical descriptor to designate their position in the night sky. This designation is usually associated with a particular star catalogue. These catalogues group stars together by some particular property, or by the instrument that made the initial discovery of radiation from that star in a particular waveband. These star naming conventions are useful when searching for a particular type of star in a particular region of the sky, such as when undertaking research.
Appendix 5: Star Naming Companies
The rise of the internet has seen the establishment of a number of star naming companies. They assign personal names to stars and operate as a commercial business in doing so. Upon application and payment of a small fee they will name a star after a person. They have no official status given to them by any astronomical body within the astronomical community to assign personal names to stars on a fee for service basis. The primary and universally recognized authority on naming stars (and basically all things to do with astronomy) is the International Astronomical Union (IAU). The IAU does not recognize names given to stars by private commercial companies. It is rare for the IAU to designate a star with a proper (personal) name. Most often, the IAU will assign it the name used for that object by an ancient culture, if one is known to exist. In the absence of an early star name being identified for use, important historical figures in astronomy (astronomers or astronauts) are usually chosen to be honored in this way.
| http://members.westnet.com.au/Gary-David-Thompson/page9m.html | 13
15 | In a computer algebra setting, the greatest common divisor is necessary to make sense of fractions, whether to work with rational numbers or ratios of polynomials. Generally a canonical form will require common factors in the numerator and denominator to be cancelled. For instance, the expressions
-8/6, 4/-3, -(1+(1/3)), -1*(12/9), 2/3 - 2
all mean the same thing, which would be preferable to write as -4/3 (that is, a single fraction of the form z/n, z an integer, n natural).
Some useful properties of the gcd
Suppose we are working with a suitable ring R (for instance, the integers, or the rational polynomials). Then the 'greatest' (in the sense of magnitude for numbers, or degree for polynomials) divisor of an element r is clearly itself (1.r = r). Furthermore, for any element r of R, 0.r = 0 so r is a divisor of 0. Hence a first observation:
For any r, gcd(r,0) = r.
Next up, it's completely trivial that if c is the gcd of a and b, it is the gcd of b and a. That is, gcd(a,b) = gcd(b,a).
Letting c be any common divisor of a and b, there must be α and β such that a = αc and b = βc (otherwise, c isn't a divisor!). So we observe that a - b = αc - βc = (α-β)c. That is,
If c divides a and c divides b, then c divides a-b.
Then, since division can be thought of as repeated subtraction, we deduce
If c divides a and c divides b, and if a≥b, then c divides the remainder of a/b (for non-zero b).
Equipped with just these four simple facts, you might be able to determine a process for finding the greatest common divisor of two integers (note that this is defined to be a natural number). If you can't, the work has conveniently already been done by Euclid, some 2 millennia ago. Despite its age, Euclid's algorithm remains the best method for this task, and the so-called extended Euclidean algorithm even allows you to express the gcd in terms of the original integers, as dialectic's writeup above describes.
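To make those facts concrete, here is a small Python sketch (my illustration, not part of the original writeup) of Euclid's algorithm and its extended form, which expresses the gcd as an integer combination of the inputs:

```python
def gcd(a, b):
    """Euclid's algorithm: repeatedly replace (a, b) by (b, a mod b)."""
    a, b = abs(a), abs(b)
    while b:                      # gcd(r, 0) = r, so stop when b reaches 0
        a, b = b, a % b           # if c divides a and b, c divides a mod b
    return a

def extended_gcd(a, b):
    """Return (g, x, y) with g = gcd(a, b) and g = a*x + b*y."""
    if b == 0:
        return (a, 1, 0)
    g, x, y = extended_gcd(b, a % b)
    return (g, y, x - (a // b) * y)

print(gcd(12, 9), extended_gcd(12, 9))   # 3 (3, 1, -1): 3 = 12*1 + 9*(-1)
```

The canonical form -4/3 mentioned above is obtained by dividing numerator and denominator by their gcd and normalising the sign.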
To prove that expertise in this area has in fact progressed in the past two thousand years, the rest of this writeup considers the problem of finding the gcd of two polynomials (in a single variable). This is equivalent to finding their common roots, meaning that gcd calculations can be applied to solving systems of equations.
A natural first approach is to refine Euclid's algorithm as devised for numbers to work on polynomials. It is indeed possible to create such an algorithm. However, one of two problems arises. Either you are forced to work with fractional coefficients (awkward) or a fraction-free approach is employed by rescaling - which causes the size of coefficients in intermediate expressions to skyrocket (again awkward). Non-euclidean techniques can be devised which call for a euclidean gcd algorithm only in circumstances where these two pitfalls can be avoided. But before discussing these, there is an approach along Euclidean lines which works somewhat better than a naive re-scaling version of Euclid's algorithm.
GCDs, Resultants, and the Sylvester Matrix
First, some definitions. For polynomials P = a_n x^n + ... + a_0 and Q = b_m x^m + ... + b_0, whose roots are the sets {α_i} and {β_j} respectively, we define the resultant of P and Q as
r(P,Q) = Π_{i=1..n} Π_{j=1..m} (β_j - α_i)
(That is, the product of all possible differences of roots.)
Then the Sylvester matrix of P and Q is an (m+n)×(m+n) square matrix generated by m copies of the coefficients of P and n copies of those of Q, shifted right each row and padded by zeros:
a_n a_n-1 ... a_0  0  ...  0
 0  a_n  ... a_1 a_0  ...  0
 .      .                  .
 .          .              .
 .              .          .
 0   0   ...  0  a_n ... a_0
b_m b_m-1 ... b_0  0  ...  0
 0  b_m  ... b_1 b_0  ...  0
 .      .                  .
 .          .              .
 .              .          .
 0   0   ...  0  b_m ... b_0
It is a remarkable result that the determinant of this matrix is precisely the resultant of P and Q. Note that the resultant will be zero iff P and Q have a common root - in which case, they have a non-trivial greatest common divisor.
Hence, Gaussian elimination can be applied to the Sylvester matrix, analogously to Euclid's algorithm. If the result is non-zero, the polynomials are coprime. If a result of zero is obtained, then there is a gcd of interest - with a bit more work, an algorithm can keep track of the cancellations that occur during elimination and in this way the common factors are determined. Finally, a system of rescalings exists which allows for fraction-free calculation whilst generating coefficients with predictable common factors, which may therefore be efficiently cancelled. This entire technique is encapsulated in the Dodgson-Bareiss algorithm, and it represents the most efficient way to find the gcd along euclidean lines.
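As a concrete (if naive) illustration of the resultant test, the following Python sketch builds the Sylvester matrix and evaluates its determinant by ordinary Gaussian elimination over the rationals; the fraction-free Dodgson-Bareiss refinements described above are deliberately omitted, and the example polynomials are the pair used in Problem 1 below:

```python
from fractions import Fraction

def sylvester_matrix(p, q):
    """p, q are coefficient lists, highest degree first (e.g. x^2 - 1 -> [1, 0, -1])."""
    n, m = len(p) - 1, len(q) - 1          # degrees of P and Q
    size = n + m
    rows = []
    for i in range(m):                      # m shifted copies of P's coefficients
        rows.append([0] * i + p + [0] * (size - n - 1 - i))
    for i in range(n):                      # n shifted copies of Q's coefficients
        rows.append([0] * i + q + [0] * (size - m - 1 - i))
    return rows

def determinant(mat):
    """Gaussian elimination with exact rational arithmetic; returns the determinant."""
    mat = [[Fraction(x) for x in row] for row in mat]
    det, size = Fraction(1), len(mat)
    for col in range(size):
        pivot = next((r for r in range(col, size) if mat[r][col] != 0), None)
        if pivot is None:
            return Fraction(0)
        if pivot != col:
            mat[col], mat[pivot] = mat[pivot], mat[col]
            det = -det
        det *= mat[col][col]
        for r in range(col + 1, size):
            factor = mat[r][col] / mat[col][col]
            mat[r] = [a - factor * b for a, b in zip(mat[r], mat[col])]
    return det

def resultant(p, q):
    """Zero exactly when P and Q share a root, i.e. have a non-trivial gcd."""
    return determinant(sylvester_matrix(p, q))

# P = x^3 + x^2 - x - 1 and Q = x^4 + x^3 + x + 1 share the factor (x+1)^2:
print(resultant([1, 1, -1, -1], [1, 1, 0, 1, 1]))   # -> 0
```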
Non-Euclidean techniques- modular GCD calculation
We have seen that polynomial gcd calculation is possible in a fraction-free manner, at the price of intermediate expression swell, which, although reduced by the Bareiss algorithm, can still be horrific. A useful trick would be to bound the size of those intermediate expressions, and this is generally accomplished by the use of modular methods - working modulo a small value such that all expressions fall in a range 0...n or -p...p, say.
Two (de)motivating Examples
Sadly, the simplification power of modular mathematics can often erase interesting information. There is a further complication with gcds, which is that coefficients of the answer may be greater than any and all of the coefficients present in your original polynomials. Here are some examples that demonstrate these concerns directly.
Problem 1: Consider P = x^3 + x^2 - x - 1 and Q = x^4 + x^3 + x + 1. Here all the coefficients are 0 or 1. However, when you factorise P and Q you observe P = (x+1)^2 (x-1) and Q = (x+1)^2 (x^2 - x + 1). So their gcd is (x+1)^2, which expands as x^2 + 2x + 1 - we have obtained a larger coefficient. Examples of this form can be created to generate an arbitrarily large coefficient.
Problem 2: Let P = x-3 and Q = x + 2. Then these are clearly coprime- so their gcd is 1. Yet working modulo 5, Q = P so we have a non-trivial gcd of x + 2, of greater degree than the 'true' gcd.
Solutions to these problems
The first issue, of unexpectedly large coefficients, can be fairly easily resolved - there is an upper bound, the Landau-Mignotte bound, on the absolute value of coefficients in the gcd, generated by a fairly ugly formula involving the degrees of the polynomials and their coefficients. For a precise description, see the literature referenced at the end of this writeup; its existence is sufficient for the discussion that follows.
The second problem needs more work to resolve, but is easy to describe. Given polynomials P and Q, we observe (without proof) that
- degree( gcd((P mod n),(Q mod n)) ) ≥ degree( gcd(P,Q) )
- Equality holds in the above only if gcd((P mod n),(Q mod n)) = (gcd(P,Q)) mod n. That is, if the modular image of the gcd is the gcd of the modular images.
- The above will fail only if n divides the resultant of P/G, Q/G, where G is the true gcd of P and Q.
- The resultant is finite, so has only finitely many prime factors. Hence, if we chose n to be prime, then only finitely many choices of n will be 'bad'.
A 'bad' gcd will be immediately apparent - it won't actually divide both P and Q! On the other hand, a gcd found by modular means has degree at least that of the true gcd, so if it turns out to be a divisor, it is the gcd. So if we just keep generating gcds working modulo a prime, eventually we'll exhaust the bad primes and uncover the true gcd.
Note that in the two algorithms offered below, a gcd calculation is required! This seems circular- but we already have a working yet potentially unwieldy gcd algorithm from the Dodgson-Bareiss approach, which, modulo a prime, won't be so unwieldy after all.
Large prime modular GCD algorithm
Pick p a prime greater than twice the LM (Landau-Mignotte) bound.
Calculate Qp = gcd(Ap,Bp) (where Xp denotes X mod p)
Rewrite Qp over the integers such that its coefficients fall in the range -p/2 to p/2 (that is, add or subtract p from each coefficient that falls outside the LM bound)
If Qp divides A and divides B, Qp is the true gcd.
If not, repeat from start with the next largest prime.
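A minimal Python sketch of the modular kernel these steps rely on - Euclid's algorithm on polynomials with coefficients reduced modulo a prime (coefficient lists are highest degree first; the prime 7 below is only an illustrative choice, not one derived from the Landau-Mignotte bound):

```python
def poly_divmod_mod(num, den, p):
    """Polynomial long division modulo a prime p; returns (quotient, remainder)."""
    num = num[:]
    inv = pow(den[0], -1, p)                      # inverse of the leading coefficient
    quot = []
    while len(num) >= len(den):
        factor = (num[0] * inv) % p
        quot.append(factor)
        for i in range(len(den)):
            num[i] = (num[i] - factor * den[i]) % p
        num.pop(0)                                # leading term has been cancelled
    while num and num[0] == 0:                    # strip leading zeros of the remainder
        num.pop(0)
    return quot, num

def poly_gcd_mod(a, b, p):
    """Monic gcd of a and b modulo prime p, by Euclid's algorithm."""
    while b:
        _, r = poly_divmod_mod(a, b, p)
        a, b = b, r
    inv = pow(a[0], -1, p)
    return [(c * inv) % p for c in a]

p = 7
A = [1, 1, -1, -1]          # x^3 + x^2 - x - 1
B = [1, 1, 0, 1, 1]         # x^4 + x^3 + x + 1
print(poly_gcd_mod([c % p for c in A], [c % p for c in B], p))   # [1, 2, 1] = x^2 + 2x + 1
```

For this pair the balanced coefficients 1, 2, 1 already divide A and B over the integers, so x^2 + 2x + 1 is the true gcd.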
Many small primes modular GCD algorithm
The above single-prime technique could still yield large coefficients if the LM bound is high; and we only determine if it was a 'good' prime at the very end. Using the Chinese remainder theorem, it is possible to build up to the LM bound in stages, discarding any unlucky primes along the way. By using successive small primes, no modular gcd calculation will be very difficult, and there need not be many such iterations - knowing the answer mod m and n, the Chinese remainder theorem yields an answer mod mn.
Define LM = Landau-Mignotte bound of A, B
Choose a prime p; p should be chosen so as not to divide the leading coefficients of A and B
C := gcd(A mod p, B mod p)
*  If degree C = 0, return 1
      (NB this assumes monic polynomials; the point is we have a trivial gcd)
   Result := C;  Known := p
While Known ≤ 2LM do
   Choose a new prime p (again not dividing the leading coefficients)
   C := gcd(A mod p, B mod p)
   If degree C < degree Result, goto *      (all previous primes bad!)
   If degree C > degree Result, do nothing  (this prime is bad)
   If degree C = degree Result, combine C and Result by the Chinese remainder theorem;  Known := Known × p
Check Result divides A and B. If not, redo from start, avoiding all primes used so far.
Note that an early-exit strategy is possible- if the result after applying the chinese remainder theorem is unchanged, the odds are very good that you have found the true gcd. So the algorithm can be refined by testing for Result dividing A and B whenever this occurs, and halting the loop if successful.
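The per-coefficient combination step can be sketched in a few lines (again an illustration rather than the writeup's own code); symmetric() maps a residue to the balanced representative used when lifting the answer back to the integers:

```python
def crt_pair(c1, m1, c2, m2):
    """Combine c ≡ c1 (mod m1) and c ≡ c2 (mod m2) into c mod m1*m2 (m1, m2 coprime)."""
    t = ((c2 - c1) * pow(m1, -1, m2)) % m2
    return c1 + m1 * t

def symmetric(c, m):
    """Balanced representative of c mod m, lying in (-m/2, m/2]."""
    return c - m if c > m // 2 else c

# a gcd coefficient that is really -3 appears as 2 mod 5 and 4 mod 7;
# combining the two images and using the balanced range recovers it:
c = crt_pair(2, 5, 4, 7)
print(c, symmetric(c, 35))   # 32 -3
```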
References: CM30070 Computer Algebra, University of Bath- lecture notes, revision notes, and the lecturer's book of the same title. More in-depth information on the specifics of algorithms and bounds can be found in the book, although it is currently out of print. Details at http://www.bath.ac.uk/~masjhd/DSTeng.html | http://everything2.com/?node_id=482506 | 13 |
15 | Many programming languages use ASCII coding for characters (ASCII stands for American Standard Code for Information Interchange). Some recent languages, e.g., Java, use UNICODE which, because it can encode a bigger set of characters, is more useful for languages like Japanese and Chinese which have a larger set of characters than are used in English.
We'll use ASCII encoding of characters as an example. In ASCII, every character is encoded with the same number of bits: 8 bits per character. Since there are 256 different values that can be encoded with 8 bits, there are potentially 256 different characters in the ASCII character set. The common characters, e.g., alphanumeric characters, punctuation, control characters, etc., use only 7 bits; there are 128 different characters that can be encoded with 7 bits. In C++ for example, the type char is divided into subtypes unsigned-char and (the default signed) char. As we'll see, Huffman coding compresses data by using fewer bits to encode more frequently occurring characters so that not all characters are encoded with 8 bits. In Java there are no unsigned types and char values use 16 bits (Unicode compared to ASCII). Substantial compression results regardless of the character-encoding used by a language or platform.
We'll look at how the string "go go gophers" is encoded in ASCII, how we might save bits using a simpler coding scheme, and how Huffman coding is used to compress the data resulting in still more savings.
With an ASCII encoding (8 bits per character) the 13 character string "go go gophers" requires 104 bits. The table below on the left shows how the coding works.
char    ASCII value   ASCII (7-bit binary)      3-bit code
g          103           1100111                  000
o          111           1101111                  001
p          112           1110000                  010
h          104           1101000                  011
e          101           1100101                  100
r          114           1110010                  101
s          115           1110011                  110
space       32           0100000                  111
The string "go go gophers" would be written (coded numerically) as 103 111 32 103 111 32 103 111 112 104 101 114 115. Although not easily readable by humans, this would be written as the following stream of bits (the spaces would not be written, just the 0's and 1's)
1100111 1101111 0100000 1100111 1101111 0100000 1100111 1101111 1110000 1101000 1100101 1110010 1110011
Since there are only eight different characters in "go go gophers", it's possible to use only 3 bits to encode the different characters. We might, for example, use the encoding in the table on the right above, though other 3-bit encodings are possible.
Now the string "go go gophers" would be encoded as 0 1 7 0 1 7 0 1 2 3 4 5 6 or, as bits:
000 001 111 000 001 111 000 001 010 011 100 101 110
By using three bits per character, the string "go go gophers" uses a total of 39 bits instead of 104 bits. More bits can be saved if we use fewer than three bits to encode characters like g, o, and space that occur frequently and more than three bits to encode characters like e, p, h, r, and s that occur less frequently in "go go gophers". This is the basic idea behind Huffman coding: to use fewer bits for more frequently occurring characters. We'll see how this is done using a tree that stores characters at the leaves, and whose root-to-leaf paths provide the bit sequence used to encode the characters.
Using a tree (actually a binary trie, more on that later) all characters are stored at the leaves of a complete tree. In the diagram to the right, the tree has eight levels meaning that the root-to-leaf path always has seven edges. A left-edge (black in the diagram) is numbered 0, a right-edge (blue in the diagram) is numbered 1. The ASCII code for any character/leaf is obtained by following the root-to-leaf path and concatenating the 0's and 1's. For example, the character 'a', which has ASCII value 97 (1100001 in binary), is shown with root-to-leaf path of right-right-left-left-left-left-right.
The structure of the tree can be used to determine the coding of any leaf by using the 0/1 edge convention described. If we use a different tree, we get a different coding. As an example, the tree below on the right yields the coding shown on the left.
Using this coding, "go go gophers" is encoded (spaces wouldn't appear in the bitstream) as:
10 11 001 10 11 001 10 11 0100 0101 0110 0111 000
This is a total of 37 bits, which saves two bits from the encoding in which each of the 8 characters has a 3-bit encoding that is shown above! The bits are saved by coding frequently occurring characters like 'g' and 'o' with fewer bits (here two bits) than characters that occur less frequently like 'p', 'h', 'e', and 'r'.
The character-encoding induced by the tree can be used to decode a stream of bits as well as encode a string into a stream of bits. You can try to decode the following bitstream; the answer with an explanation follows:
To decode the stream, start at the root of the encoding tree, and follow a left-branch for a 0, a right branch for a 1. When you reach a leaf, write the character stored at the leaf, and start again at the top of the tree. To start, the bits are 010101100111. This yields left-right-left-right to the letter 'h', followed (starting again at the root) with left-right-right-left to the letter 'e', followed by left-right-right-right to the letter 'r'. Continuing until all the bits are processed yields
When all characters are stored in leaves, and every interior/(non-leaf) node has two children, the coding induced by the 0/1 convention outlined above has what is called the prefix property: no bit-sequence encoding of a character is the prefix of any other bit-sequence encoding. This makes it possible to decode a bitstream using the coding tree by following root-to-leaf paths. The tree shown above for "go go gophers" is an optimal tree: there are no other trees with the same characters that use fewer bits to encode the string "go go gophers". There are other trees that use 37 bits; for example you can simply swap any sibling nodes and get a different encoding that uses the same number of bits. We need an algorithm for constructing an optimal tree which in turn yields a minimal per-character encoding/compression. This algorithm is called Huffman coding, and was invented by D. Huffman in 1952. It is an example of a greedy algorithm.
We'll use Huffman's algorithm to construct a tree that is used for data compression. In the previous section we saw examples of how a stream of bits can be generated from an encoding, e.g., how "go go gophers" was written as 1011001101100110110100010101100111000. We also saw how the tree can be used to decode a stream of bits. We'll discuss how to construct the tree here.
We'll assume that each character has an associated weight equal to the number of times the character occurs in a file, for example. In the "go go gophers" example, the characters 'g' and 'o' have weight 3, the space has weight 2, and the other characters have weight 1. When compressing a file we'll need to calculate these weights, we'll ignore this step for now and assume that all character weights have been calculated. Huffman's algorithm assumes that we're building a single tree from a group (or forest) of trees. Initially, all the trees have a single node with a character and the character's weight. Trees are combined by picking two trees, and making a new tree from the two trees. This decreases the number of trees by one at each step since two trees are combined into one tree. The algorithm is as follows:
Repeat this step until there is only one tree: Choose two trees with the smallest weights, call these trees T1 and T2. Create a new tree whose root has a weight equal to the sum of the weights of T1 and T2 and whose left subtree is T1 and whose right subtree is T2.
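Here is one way the greedy loop might look in Python (a sketch for illustration, not the course's own code); trees are nested (character, left, right) tuples, with character equal to None for internal nodes:

```python
import heapq
from collections import Counter

def build_tree(text):
    """Greedy Huffman construction: repeatedly merge the two lightest trees."""
    # initial forest: one leaf per character, weighted by its count
    heap = [(count, idx, (ch, None, None))
            for idx, (ch, count) in enumerate(Counter(text).items())]
    heapq.heapify(heap)
    idx = len(heap)
    while len(heap) > 1:
        w1, _, t1 = heapq.heappop(heap)           # two minimal-weight trees
        w2, _, t2 = heapq.heappop(heap)
        heapq.heappush(heap, (w1 + w2, idx, (None, t1, t2)))
        idx += 1                                   # tie-breaker keeps heap entries comparable
    return heap[0][2]

def codes(tree, prefix=""):
    """Root-to-leaf paths: 0 for a left edge, 1 for a right edge."""
    ch, left, right = tree
    if ch is not None:
        return {ch: prefix or "0"}
    table = {}
    table.update(codes(left, prefix + "0"))
    table.update(codes(right, prefix + "1"))
    return table

table = codes(build_tree("go go gophers"))
print(sum(len(table[ch]) for ch in "go go gophers"))   # 37 bits in total
```

Because ties can be broken either way, the resulting tree (and the individual codes) may differ from the one pictured in the walkthrough, but the 37-bit total is the same.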
We'll use the string "go go gophers" as an example. Initially we have the forest shown below. The nodes are shown with a weight/count that represents the number of times the node's character occurs.
We pick two minimal nodes. There are five nodes with the minimal weight of one, it doesn't matter which two we pick. In a program, the deterministic aspects of the program will dictate which two are chosen, e.g., the first two in an array, or the elements returned by a priority queue implementation. We create a new tree whose root is weighted by the sum of the weights chosen. We now have a forest of seven trees as shown here:
Choosing two minimal trees yields another tree with weight two as shown below. There are now six trees in the forest of trees that will eventually build an encoding tree.
Again we must choose the two trees of minimal weight. The lowest weight is the 'e'-node/tree with weight equal to one. There are three trees with weight two, we can choose any of these to create a new tree whose weight will be three.
Now there are two trees with weight equal to two. These are joined into a new tree whose weight is four. There are four trees left, one whose weight is four and three with a weight of three.
Two minimal (three weight) trees are joined into a tree whose weight is six. In the diagram below we choose the 'g' and 'o' trees (we could have chosen the 'g' tree and the space-'e' tree or the 'o' tree and the space-'e' tree.) There are three trees left.
The minimal trees have weights of three and four, these are joined into a tree whose weight is seven leaving two trees.
Finally, the last two trees are joined into a final tree whose weight is thirteen, the sum of the two weights six and seven. Note that this tree is different from the tree we used to illustrate Huffman coding above, and the bit patterns for each character are different, but the total number of bits used to encode "go go gophers" is the same.
The character encoding induced by the last tree is shown below, where again 0 is used for left edges and 1 for right edges: g = 00, o = 01, space = 100, e = 101, s = 1100, h = 1101, p = 1110, r = 1111.
The string "go go gophers" would be encoded as shown (with spaces used for easier reading, the spaces wouldn't appear in the real encoding).
00 01 100 00 01 100 00 01 1110 1101 101 1111 1100
Once again, 37 bits are used to encode "go go gophers". There are several trees that yield an optimal 37-bit encoding of "go go gophers". The tree that actually results from a programmed implementation of Huffman's algorithm will be the same each time the program is run for the same weights (assuming no randomness is used in creating the tree).
Huffman's algorithm is an example of a greedy algorithm. It's called greedy because the two smallest nodes are chosen at each step, and this local decision results in a globally optimal encoding tree. In general, greedy algorithms use small-grained, or local minimal/maximal choices to result in a global minimum/maximum. Making change using U.S. money is another example of a greedy algorithm.
Problem: give change in U.S. coins for any amount (say under $1.00) using the minimal number of coins.
Solution (assuming coin denominations of $0.25, $0.10, $0.05, and $0.01, called quarters, dimes, nickels, and pennies, respectively): use the highest-value coin that you can, and give as many of these as you can. Repeat the process until the correct change is given.
Example: make change for $0.91. Use 3 quarters (the highest coin we can use, and as many as we can use). This leaves $0.16. To make change use a dime (leaving $0.06), a nickel (leaving $0.01), and a penny. The total change for $0.91 is three quarters, a dime, a nickel, and a penny. This is a total of six coins, it is not possible to make change for $0.91 using fewer coins.
The solution/algorithm is greedy because the largest denomination coin is chosen to use at each step, and as many are used as possible. This locally optimal step leads to a globally optimal solution. Note that the algorithm does not work with different denominations. For example, if there are no nickels, the algorithm will make change for $0.31 using one quarter and six pennies, a total of seven coins. However, it's possible to use three dimes and one penny, a total of four coins. This shows that greedy algorithms are not always optimal algorithms.
In this section we'll see the basic programming steps in implementing huffman coding. More details can be found in the language specific descriptions.
There are two parts to an implementation: a compression program and an uncompression/decompression program. You need both to have a useful compression utility. We'll assume these are separate programs, but they share many classes, functions, modules, code or whatever unit-of-programming you're using. We'll call the program that reads a regular file and produces a compressed file the compression or huffing program. The program that does the reverse, producing a regular file from a compressed file, will be called the uncompression or unhuffing program.
To compress a file (sequence of characters) you need a table of bit encodings, e.g., an ASCII table, or a table giving a sequence of bits that's used to encode each character. This table is constructed from a coding tree using root-to-leaf paths to generate the bit sequence that encodes each character.
Assuming you can write a specific number of bits at a time to a file, a compressed file is made using the following top-level steps. These steps will be developed further into sub-steps, and you'll eventually implement a program based on these ideas and sub-steps.
Build a table of per-character encodings. The table may be given to you, e.g., an ASCII table, or you may build the table from a Huffman coding tree.
Read the file to be compressed (the plain file) and process one character at a time. To process each character find the bit sequence that encodes the character using the table built in the previous step and write this bit sequence to the compressed file.
As an example, we'll use the table below on the left, which is generated from the tree on the right. Ignore the weights on the nodes, we'll use those when we discuss how the tree is created.
To compress the string/file "streets are stone stars are not", we read one character at a time and write the sequence of bits that encodes each character. To encode "streets are" we would write the following bits:
The bits would be written in the order 010, 011, 101, 11, 11, 011, 010, 001, 100, 101, 11.
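In code, the compression loop is little more than a table lookup and concatenation; here is a sketch (not the assignment's code) using the per-character codes read off the example above:

```python
def encode(text, table):
    """Look up each character's bit sequence and write them one after another."""
    return "".join(table[ch] for ch in text)

table = {'s': '010', 't': '011', 'r': '101', 'e': '11', ' ': '001', 'a': '100'}
print(encode("streets are", table))   # 010011101111101101000110010111
```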
That's the compression program. Two things are missing from the compressed file: (1) some information (called the header) must be written at the beginning of the compressed file that will allow it to be uncompressed; (2) some information must be written at the end of the file that will be used by the uncompression program to tell when the compressed bit sequence is over (this is the bit sequence for the pseudo-eof character described later).
To build a table of optimal per-character bit sequences you'll need to build a Huffman coding tree using the greedy Huffman algorithm. The table is generated by following every root-to-leaf path and recording the left/right 0/1 edges followed. These paths make the optimal encoding bit sequences for each character.
There are three steps in creating the table:
Count the number of times every character occurs. Use these counts to create an initial forest of one-node trees. Each node has a character and a weight equal to the number of times the character occurs. An example of one node trees shows what the initial forest looks like.
Use the greedy Huffman algorithm to build a single tree. The final tree will be used in the next step.
Follow every root-to-leaf path creating a table of bit sequence encodings for every character/leaf.
You must store some initial information in the compressed file that will be used by the uncompression/unhuffing program. Basically you must store the tree used to compress the original file. This tree is used by the uncompression program.
There are several alternatives for storing the tree. Some are outlined here, you may explore others as part of the specifications of your assignment.
Store the character counts at the beginning of the file. You can store counts for every character, or counts for the non-zero characters. If you do the latter, you must include some method for indicating the character, e.g., store character/count pairs.
You could use a "standard" character frequency, e.g., for any English language text you could assume weights/frequencies for every character and use these in constructing the tree for both compression and uncompression.
You can store the tree at the beginning of the file. One method for doing this is to do a pre-order traversal, writing each node visited. You must differentiate leaf nodes from internal/non-leaf nodes. One way to do this is to write a single bit for each node, say 1 for leaf and 0 for non-leaf. For leaf nodes, you will also need to write the character stored. For non-leaf nodes there's no information that needs to be written, just the bit that indicates there's an internal node. (A small sketch of this pre-order scheme follows below.)
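Using the same (character, left, right) tuple shape as in the earlier sketch, the pre-order scheme might look like this (illustrative only; characters are written as 8-bit chunks):

```python
def write_tree(node, out):
    """Pre-order: emit '1' plus 8 character bits for a leaf, '0' for an internal node."""
    ch, left, right = node
    if ch is not None:
        out.append("1" + format(ord(ch), "08b"))
    else:
        out.append("0")
        write_tree(left, out)
        write_tree(right, out)

def read_tree(bits):
    """Inverse of write_tree; bits is an iterator over '0'/'1' characters."""
    if next(bits) == "1":
        ch = chr(int("".join(next(bits) for _ in range(8)), 2))
        return (ch, None, None)
    left = read_tree(bits)
    right = read_tree(bits)
    return (None, left, right)

out = []
write_tree(('a', None, None), out)   # a one-leaf tree, just to show the format
print("".join(out))                  # 101100001
tree = read_tree(iter("101100001"))
```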
In particular, it is not possible to write just one single bit to a file, all output is actually done in "chunks", e.g., it might be done in eight-bit chunks. In any case, when you write 3 bits, then 2 bits, then 10 bits, all the bits are eventually written, but you cannot be sure precisely when they're written during the execution of your program. Also, because of buffering, if all output is done in eight-bit chunks and your program writes exactly 61 bits explicitly, then 3 extra bits will be written so that the number of bits written is a multiple of eight. Your decompressing/unhuff program must have some mechanism to account for these extra or "padding" bits since these bits do not represent compressed information.
Your decompression/unhuff program cannot simply read bits until there are no more left since your program might then read the extra padding bits written due to buffering. This means that when reading a compressed file, you cannot rely on running out of input as your only stopping condition; the pseudo-EOF character described below provides the explicit stopping signal.
Every time a file is compressed the count of the number of times the pseudo-EOF character occurs should be one --- this should be done explicitly in the code that determines frequency counts. In other words, a pseudo-char EOF with number of occurrences (count) of 1 must be explicitly created and used in creating the tree used for compression. | http://www.cs.duke.edu/csed/poop/huff/info/ | 13
21 | In computer science, the Floyd–Warshall algorithm (sometimes known as the WFI algorithm or the Roy–Floyd algorithm, since Bernard Roy described this algorithm in 1959) is a graph analysis algorithm for finding shortest paths in a weighted, directed graph. A single execution of the algorithm will find the shortest paths between all pairs of vertices. The Floyd–Warshall algorithm is an example of dynamic programming.
The Floyd-Warshall algorithm compares all possible paths through the graph between each pair of vertices. It is able to do this with only Θ(|V|³) comparisons. This is remarkable considering that there may be up to Ω(|V|²) edges in the graph, and every combination of edges is tested. It does so by incrementally improving an estimate on the shortest path between two vertices, until the estimate is known to be optimal.
Consider a graph G with vertices V, each numbered 1 through N. Further consider a function shortestPath(i, j, k) that returns the shortest possible path from i to j using only vertices 1 through k as intermediate points along the way. Now, given this function, our goal is to find the shortest path from each i to each j using only nodes 1 through N.
There are two candidates for this path: either the true shortest path only uses nodes in the set {1, ..., k-1}; or there exists some path that goes from i to k, then from k to j, that is better. We know that the best path from i to j that only uses nodes 1 through k-1 is defined by shortestPath(i, j, k-1), and it is clear that if there were a better path from i to k to j, then the length of this path would be the concatenation of the shortest path from i to k (using vertices in {1, ..., k-1}) and the shortest path from k to j (also using vertices in {1, ..., k-1}).
Therefore, we can define shortestPath(i, j, k) in terms of the following recursive formula:
shortestPath(i, j, 0) = edgeCost(i, j)
shortestPath(i, j, k) = min( shortestPath(i, j, k-1), shortestPath(i, k, k-1) + shortestPath(k, j, k-1) )
This formula is the heart of Floyd–Warshall. The algorithm works by first computing shortestPath(i, j, 1) for all (i, j) pairs, then using that to find shortestPath(i, j, 2) for all pairs, etc. This process continues until k = n, and we have found the shortest path for all pairs using any intermediate vertices.
Conveniently, when calculating the kth case, one can overwrite the information saved from the computation of k-1. This means the algorithm uses quadratic memory. Be careful to note the initialization conditions:
/* Assume a function edgeCost(i,j) which returns the cost of the edge from i to j
   (infinity if there is none).
   Also assume that n is the number of vertices and edgeCost(i,i) = 0. */

int path[][];
/* A 2-dimensional matrix. At each step in the algorithm, path[i][j] is the shortest path
   from i to j using intermediate values in (1..k-1). Each path[i][j] is initialized to
   edgeCost(i,j). */

procedure FloydWarshall ()
   for k := 1 to n
      for each (i,j) in {1,..,n}²
         path[i][j] = min ( path[i][j], path[i][k]+path[k][j] );
Behaviour with negative cycles
For numerically meaningful output, Floyd-Warshall assumes that there are no negative cycles (in fact, between any pair of vertices which form part of a negative cycle, the shortest path is not well-defined because the path can be infinitely small). Nevertheless, if there are negative cycles, Floyd–Warshall can be used to detect them. A negative cycle can be detected if the path matrix contains a negative number along the diagonal. If path[i][i] is negative for some vertex i, then this vertex belongs to at least one negative cycle.
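A short runnable Python version of the pseudocode above, including the diagonal check for negative cycles (an illustrative sketch, not part of the original article):

```python
INF = float("inf")

def floyd_warshall(edge_cost, n):
    """edge_cost[i][j] is the weight of edge i->j (INF if absent); vertices are 0..n-1."""
    path = [[0 if i == j else edge_cost[i][j] for j in range(n)] for i in range(n)]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if path[i][k] + path[k][j] < path[i][j]:
                    path[i][j] = path[i][k] + path[k][j]
    has_negative_cycle = any(path[i][i] < 0 for i in range(n))
    return path, has_negative_cycle

cost = [[0, 3, 10],
        [INF, 0, 4],
        [INF, INF, 0]]
dist, neg = floyd_warshall(cost, 3)
print(dist[0][2], neg)   # 7 False -- the route through vertex 1 beats the direct edge
```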
To find all n² entries of the path matrix for step k from those for step k-1 requires 2n² bit operations. Since we begin with the matrix for k = 0 and compute the sequence of n matrices for k = 1, ..., n, the total number of bit operations used is n·2n² = 2n³. Therefore, the complexity of the algorithm is Θ(n³) and the problem can be solved by a deterministic machine in polynomial time.
Applications and generalizations
The Floyd–Warshall algorithm can be used to solve the following problems, among others:
- Shortest paths in directed graphs (Floyd's algorithm).
- Transitive closure of directed graphs (Warshall's algorithm). In Warshall's original formulation of the algorithm, the graph is unweighted and represented by a Boolean adjacency matrix. Then the addition operation is replaced by logical conjunction (AND) and the minimum operation by logical disjunction (OR).
- Finding a regular expression denoting the regular language accepted by a finite automaton (Kleene's algorithm)
- Inversion of real matrices (Gauss-Jordan algorithm).
- Optimal routing. In this application one is interested in finding the path with the maximum flow between two vertices. This means that, rather than taking minima as in the pseudocode above, one instead takes maxima. The edge weights represent fixed constraints on flow. Path weights represent bottlenecks; so the addition operation above is replaced by the minimum operation.
- Testing whether an undirected graph is bipartite.
Floyd, Robert W. (1962). "Algorithm 97: Shortest Path". Communications of the ACM 5 (6): 345.
Kleene, S. C. (1956). Automata Studies. Princeton University Press.
Warshall, Stephen (1962). "A theorem on Boolean matrices". Journal of the ACM 9 (1): 11–12.
Kenneth H. Rosen (2003). Discrete Mathematics and Its Applications, 5th Edition. Addison Wesley.
- Section 26.2, "The Floyd–Warshall algorithm", pp. 558–565;
- Section 26.4, "A general framework for solving path problems in directed graphs", pp. 570–576. | http://www.reference.com/browse/Warshall+Algorithm | 13 |
20 | Debates are a staple of middle and high school social studies classes. But have you ever thought about using debates at the lower grades -- or in math class? Education World offers five debate strategies and extra lessons for students of all ages. Included: Debate fairy tale ethics, use four corner and inner/outer circle strategies, more.
All you need to have a great classroom debate is an interesting topic -- such as the ones above -- to engage students ...
Well, perhaps that point could be debated -- but there's no debating the fact that this week's Lesson Planning article provides all the resources you need for great classroom debates. Aside from high-interest debate topics, this Education World resource provides sample debate formats, a few rules for kids to remember, a bunch of fun strategies, and a handful of great lesson ideas!
This week, Education World provides five lessons that are sure to make the most of your next classroom debate. Click each of the five lesson headlines below for a complete teaching resource. (Appropriate grade levels for each lesson appear in parentheses.)
Stage a Debate: A Primer for Teachers (Lincoln-Douglas Debate Format)
Adapt the standard debate format plus ten strategies for engaging students in debate! (Grades 3-12)
Role Play Debate
Students assume the roles of various stakeholders in debates on issues of high interest. (Grades 3-12)
Using Fairy Tales to Debate Ethics
Three fairy tales challenge students to think about honesty, right and wrong, and other questions of ethics. (Grades K-8)
Four Corners Debate
A debate strategy gets kids thinking and moving. Debate topics included for all grades. (Grades K-12)
Inner Circle, Outer Circle Debate Strategy
The inner/outer circle debate strategy emphasizes listening to others' views and writing an opinion essay. (Grades 3-12)
Click here for resources related to debate rules, rubrics for measuring student participation, a list of debate topics for classroom use, additional debate lesson plans, and special strategies for engaging kids in debates! | http://www.educationworld.com/a_lesson/lesson/lesson304.shtml | 13 |
30 | In logic, an argument (Latin argumentum - "proof, evidence, token, subject, contents") is a connected series of statements or propositions which are intended to provide support, justification or evidence for the truth of another statement.
A deductive argument asserts that the truth of the conclusion is a logical consequence of the premises; an inductive argument asserts that the truth of the conclusion is supported by the premises. Deductive arguments are judged by the properties of validity and soundness. An argument is valid if and only if the conclusion is a logical consequence of the premises. A sound argument is a valid argument with true premises.
For example, the following is a valid argument (because the conclusion follows from the premises) and also sound (because additionally the premises are true):
- Premise 1. All Greeks are human.
- Premise 2. All humans are mortal.
- Argument. Take an arbitrary Greek. By premise 1 this Greek is human; by premise 2 this human is mortal. Thus, this arbitrary Greek is mortal.
- Conclusion. Therefore, all Greeks are mortal.
Invalid arguments involve several fallacies that do not satisfy the idea that an argument must deduce a conclusion that is logically coherent. A common example is the non sequitur, where the conclusion is completely disconnected from the premises.
Not all fallacious arguments are invalid. In a circular argument, the conclusion actually is a premise, so the argument is trivially valid. It is completely uninformative, however, and doesn't really prove anything.
From the source
An odd way of looking at arguments, and possibly how not to do them properly, is the following sketch:
M: I came here for a good argument.
A: No you didn't; no, you came here for an argument.
M: An argument isn't just contradiction.
A: It can be.
M: No it can't. An argument is a connected series of statements intended to establish a proposition.
A: No it isn't.
M: Yes it is! It's not just contradiction.
A: Look, if I argue with you, I must take up a contrary position.
M: Yes, but that's not just saying 'No it isn't.'
A: Yes it is!
M: No it isn't! | http://rationalwiki.org/w/index.php?title=Argument&oldid=998597 | 13 |
47 | Updated May 2009
Below you will find links that will take you to the best resources on the Web for
- debate rules;
- debate rubrics for student assessment;
- debate topics for classroom use;
- more debate lesson plans; and
- fun debate strategies.
Use one of these rubrics to assess student performance, or adapt the rubrics to create one that meets your needs:
- Ideas for Debate Topics
This teacher-created list contains more than three dozen topics, mostly about student-centered issues ("Should students be required to wear uniforms to school?" or "Should students be permitted to go to PG-13 movies?"). Included: A good list of topics of interest to students at the elementary or middle school level.
- Social Issues Homework Center
This Web page from the Multnomah County Library in Portland, Oregon, offers links to resources related to more than two dozen debate topics including affirmative action, animal rights, child labor, gangs, and flag burning.
- IDEA Debatabase
This database/search engine links students to resources for debates on issues related to culture, the environment and animal welfare, science and technology, sports, more. Plus a database of debate skill-building exercises.
- High School Debate Topics
The site offers links to resources related to debate topics that include mental health care policy, weapons of mass destruction, privacy issues, renewable energy, juvenile crime, more.
The following fun strategies can be used to engage students and vary the debate structure by involving the entire class in different ways:
- Three-Card strategy -- This technique can be used as a pre-debate strategy to help students gather information about topics they might not know a lot about. It can also be used after students observe two groups in a debate, when the debatable question is put up for full classroom discussion. This strategy provides opportunities for all students to participate in discussions that might otherwise be monopolized by students who are frequent participators. In this strategy, the teacher provides each student with two or three cards on which are printed the words "Comment or Question." When a student wishes to make a point as part of the discussion, he or she raises one of the cards; after making a comment or asking a question pertinent to the discussion, the student turns in the card. This strategy encourages participants to think before jumping in; those who are usually frequent participants in classroom discussions must weigh whether the point they wish to make is valuable enough to turn in a card. When a student has used all the cards, he or she cannot participate again in the discussion until all students have used all their cards.
- Participation Countdown strategy -- Similar to the technique above, the countdown strategy helps students monitor their participation, so they don't monopolize the discussion. In this strategy, students raise a hand when they have something to say. The second time they have something to say, they must raise their hand with one finger pointing up (to indicate they have already participated once). When they raise their hand a third time, they do so with two fingers pointing up (to indicate they have participated twice before). After a student has participated three times, he or she cannot share again as long as any other student has something to add to the discussion.
- Tag Team Debate strategy -- This strategy can be used to help students learn about a topic before a debate, but it is probably better used when opening up discussion after a formal debate or as an alternative to the Lincoln-Douglas format. In a tag team debate, each team of five members represents one side of a debatable question. Each team has a set amount of time (say, 5 minutes) to present its point of view. When it's time for the team to state its point of view, one speaker from the team takes the floor. That speaker can speak for no more than 1 minute, and must "tag" another member of the team to pick up the argument before his or her minute is up. Team members who are eager to pick up a point or add to the team's argument, can put out a hand to be tagged. That way, the current speaker knows who might be ready to pick up the team's argument. No member of the team can be tagged twice until all members have been tagged once.
- Role Play Debate strategy -- In the Lincoln-Douglas debate format, students play the roles of Constructor, Cross-Examiner, and so on. But many topics lend themselves to a different form of debate -- the role-play debate. In a role-play debate, students examine different points of view or perspectives related to an issue. See a sample lesson: Role Play Debate.
- Fishbowl strategy -- This strategy helps focus the attention of students not immediately involved in the current classroom debate; or it can be used to put the most skilled and confident debaters center stage, as they model proper debate form and etiquette. As the debaters sit center-stage (in the "fishbowl"), other students observe the action from outside the fishbowl. To actively involve observers, appoint them to judge the debate; have each observer keep a running tally of new points introduced by each side as the debate progresses. Note: If you plan to use debates in the future, it might be a good idea to videotape the final student debates your current students present. Those videos can be used to help this year's students evaluate their participation, and students in the videos can serve as the "fishbowl" group when you introduce the debate structure to future students. Another alternative: Watch one of the Online Debate Videos from Debate Central.
- Inner Circle/Outer Circle strategy -- This strategy, billed as a pre-writing strategy for editorial opinion pieces, helps students gather facts and ideas about an issue up for debate. It focuses students on listening carefully to their classmates. The strategy can be used as an information-gathering session prior to a debate or as the structure for the actual debate. See a sample lesson: Inner Circle/Outer Circle Debate.
- Think-Pair-Share Debate strategy -- This strategy can be used during the information gathering part of a debate or as a stand-alone strategy. Students start the activity by gathering information on their own. Give students about 10 minutes to think and make notes. Next, pair each student with another student; give the pair about 10 minutes to share their ideas, combine their notes, and think more deeply about the topic. Then pair those students with another pair; give them about 10 minutes to share their thoughts and gather more notes… Eventually, the entire class will come together to share information they have gathered about the topic. Then students will be ready to knowledgably debate the issue at hand. See the Think-Pair-Share strategy in action in an Education World article, Discussion Webs in the Classroom.
- Four Corners Debate strategy -- In this active debate strategy, students take one of four positions on an issue. They either strongly agree, agree, disagree, or strongly disagree. See a sample lesson: Four Corners Debate.
- Graphic Organizer strategy -- A simple graphic organizer enables students to compare and contrast, to visualize, and to construct their position on any debatable question. See a sample lesson using a simple two-column comparison graphic organizer in the Education World article Discussion Webs in the Classroom.
- Focus Discussions strategy -- The standard rules for a Lincoln-Douglas style debate allow students 3 minutes to prepare their arguments. The debatable question/policy is not introduced prior to that time. If your students might benefit from some research and/or discussion before the debate, you might pose the question and then have students spend one class period (or less or more) gathering information about the issue's affirmative arguments (no negative arguments allowed) and the same amount of time on the negative arguments (no affirmative arguments allowed). See a sample lesson: Human Nature: Good or Evil?.
Return to this week's Lesson Planning article, It's Up for Debate!, for five debate strategy lesson plans. | http://www.educationworld.com/a_lesson/lesson/lesson304b.shtml | 13 |
15 | Transportation Geography and Network Science/Algorithms
Dijkstra's algorithm
The centrality measures require finding the shortest path from one node to another. The most widely used algorithm is Dijkstra's algorithm. A greedy algorithm, Dijkstra's algorithm starts at the source node and gradually spans a tree to all reachable nodes. Nodes that give the shortest distance in each round are added to the tree in sequence.
Dijkstra’s Algorithm is regarded as one of the most efficient algorithms to calculate the shortest path.
For the directed graph defined previously, a shortest (travel time) path from an origin node r to every other node in the network can be calculated using this algorithm. Let D be the distance matrix of the graph, with element d(i, j) representing the length (travel time) of the link from node i to node j; if nodes i and j are not connected, the corresponding element of D is a very large number.
Say the set of nodes is divided into two subsets P and T, where P represents the set of nodes to which the shortest path from r is already known and T is the complementary set of P. Let L represent a vector of permanent labels, one for every node in P; the permanent label of a node is the shortest distance of that node from the origin node r. Say Q is the ordered set of parent nodes, recording for each node in P the adjacent node through which its shortest path passes. Let l be a vector of temporary labels corresponding to nodes in T. The steps involved in the algorithm are explained below.
- Initialize P = {r}, L = (0), and assign a large number to every element of the temporary-label vector l.
- Find all the child nodes that are adjacent to any node – the parent node – in P and that are not already in P, using the distance matrix D. Then calculate a temporary label for each child node by summing the permanent label of its parent node, taken from the vector L, and the length of the connecting link.
- Select the node with the smallest temporary label and add it at the end of the set P, deleting it from T. Then add the corresponding parent node at the end of the set Q and the corresponding label to L. With the new P, repeat the steps from 2 until there are no elements left in T.
To get the length of the shortest path from the origin node r to any other node in the network, look for the position of the destination node in P; the corresponding element of L gives the length of the shortest path. To get the shortest path itself, look for the position of the destination node s in P and read the element that occupies that same position in Q, then look for the position of this new node in P and repeat the process until the node r is reached, i.e. trace the shortest path backward until the origin is reached. The links that lie along the shortest path from origin node r to destination node s calculated this way form the shortest-path link set for that origin–destination pair.
The above algorithm is done for a single origin node r, but the shortest path from every node to every other node is required for trip distribution and traffic assignment. In that case, perform Dijkstra's algorithm for every node in the graph.
Further details of the algorithm can be found in the references below.
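A compact Python sketch of the label-setting idea just described is given below. It is illustrative only: the dist_matrix representation and the use of a binary heap in place of an explicit scan for the smallest temporary label are our assumptions. It returns the permanent labels and the parent pointers used to trace a shortest path backward.

import heapq

def dijkstra(dist_matrix, r):
    """Shortest travel times from origin r to every node.

    dist_matrix[i][j] is the link length from i to j (float('inf') if absent).
    Returns the vector of permanent labels and the parent of each node on its
    shortest path."""
    n = len(dist_matrix)
    INF = float('inf')
    label = [INF] * n
    parent = [None] * n
    visited = [False] * n
    label[r] = 0
    heap = [(0, r)]
    while heap:
        d, i = heapq.heappop(heap)
        if visited[i]:
            continue
        visited[i] = True                     # the label of i becomes permanent
        for j in range(n):
            w = dist_matrix[i][j]
            if w < INF and d + w < label[j]:  # a better temporary label is found
                label[j] = d + w
                parent[j] = i
                heapq.heappush(heap, (label[j], j))
    return label, parent

def trace_path(parent, r, s):
    """Trace the shortest path backward from destination s to origin r."""
    path = [s]
    while path[-1] != r:
        path.append(parent[path[-1]])
    return list(reversed(path))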
Community detection
The goal of community detection is to cluster the network into groups of nodes that are densely connected internally but have only few connections between groups. A cluster is a collection of nodes that are similar in terms of connections. One basic algorithm for identifying clusters is hierarchical clustering, which creates a tree-structured hierarchy of clusters: the leaves are the individual nodes and the root is a single cluster consisting of all nodes, so the levels of the hierarchy show communities at different resolutions. Details of the algorithm can be found in the article on hierarchical clustering.
- Dijkstra's algorithm http://en.wikipedia.org/wiki/Dijkstra%27s_algorithm
- Newman, M.E. Networks. 2010. Oxford: Oxford University Press. | http://en.wikibooks.org/wiki/Transportation_Geography_and_Network_Science/Algorithms | 13 |
18 | An Overview of Text to Speech
This page provides a very brief outline of text to speech.
Text to speech, or TTS for short, is the automatic conversion of text
into speech. The device or computer program that does this is a TTS
synthesiser, and if it is a computer program it is often called a TTS Engine.
You do not, of course, need to know all about TTS to be able to use a TTS Engine
like Orpheus, but please read on to find out more.
A typical TTS engine comprises several components. Depending on the
nature of a particular TTS engine, these include those which:
- Resolve uncertainties (ambiguities) in the text, and convert common
abbreviations like Mr to Mister, or recognise dates and currencies as
differing from just ordinary numbers
- Convert the text to their corresponding speech sounds (phonemes) using a
pronunciation dictionary (lexicon), pronunciation rules, statistical methods
predicting most likely pronunciations, or a combination of all these
- Generate information, often called prosody, which describes how to
speak the sounds, for instance their duration and the pitch of the voice
- Convert the phonemes and prosody to an audio speech signal
There are other sub-systems that may also be used, but all TTS engines have
the equivalent of the above. Some of these can be very sophisticated and
may use very complex natural language processing methods, for instance
especially to resolve ambiguities in the text, or to help pronunciations that depend
on grammar or the context of the text. Consider the difficulty determining
the pronunciation of words like "row" and "bow" in phrases like:
"There was a row."
"The violin player took a bow."
We human readers can have problems at times working these pronunciations out.
So it is no wonder that TTS engines sometimes get it wrong also!
The last stage in the synthesis process, the one that turns the phonemes and prosody of the
speech into the speech signal you hear, is often used to label the type of the TTS
Engine. This is probably because this stage broadly determines what the synthesised
speech sounds like.
The last stage may fall into one of a number of broad classes.
Formant synthesis uses a relatively simple system to
select from a small number of parameters, with which to control a mathematical
model of speech sounds. A set of parameters is picked for each speech
sound and they are then joined up to make the speech. This stream of
parameters is then turned into synthetic speech using the model.
Articulatory synthesis also mathematically models
speech production, but models the speech production mechanism itself using a
complex physical model of the human vocal tract.
Concatenative synthesis does not use these sorts of
model directly, but instead uses a database of fragments, or units, of recorded
and coded speech and extracts from it the best string of units to stitch together
to form the synthetic speech.
A bit more about each of these follows.
Formant synthesis systems synthesise speech using an acoustic model of
the speech signal. This means that they model the speech spectrum and
its changes in time as we speak, rather than the production mechanisms
themselves. Formant synthesis systems are sometimes referred to as
synthesis-by-rule systems or more usually formant synthesisers.
Commercial TTS engines using formant synthesis have been around for many years.
DecTalk, Apollo, Orpheus and Eloquence are well-known TTS engines that use formant synthesis.
Formant synthesis is not a very computationally intensive process especially
for today's computing systems. The strength of formant synthesis is its relative
simplicity and the small memory footprint needed for the engine and its voice data.
This can be important for embedded and mobile computing applications. Another
less often reported strength is that the speech is intelligible and can be highly so
under difficult listening conditions. This is partly because, although the speech
is not natural sounding, all instances of a particular speech sound are somewhat the same.
It is thought that with training, this sameness may help some listeners spot sounds in
speech at unnaturally fast talking rates.
The weakness of rule-based formant synthesis is that the speech does not sound natural.
This is because of the simplicity of the models used; it is very difficult, if not impossible,
to model those subtleties of speech that give rise to a perception of naturalness.
Articulatory synthesisers model human speech production
mechanisms directly rather than the sounds generated; in some cases they might
give more natural sounding speech than formant synthesis. They classify
speech in terms of movements of the articulators, the tongue, lips and velum,
and the vibrations of the vocal cords. Text to be synthesised is converted
from a phonetic and prosodic description into a sequence of such movements and
the synchronisations between their movements calculated. A complex computational
model of the physics of a human vocal tract is then used to generate a speech signal
under control of these movements. Articulatory synthesis is a computationally
intensive process and is not widely available outside the laboratory.
Concatenative Synthesis and Unit Selection
A broad class of TTS engines use a database of recorded and coded speech
units from which to synthesise the speech. These are often termed concatenative
synthesisers, and the process concatenative synthesis.
Depending on the process used to pick the units to be spoken, these types of TTS engine
may also be referred to as unit selection synthesisers.
Unit selection is a very popular technique in modern TTS engines and has
the ability to create highly natural sounding speech. In unit selection a recorded
database is analysed and labelled to define the speech units. These can be arbitrary
pieces of speech, half a phoneme (demi-phone), phonemes, diphones (two adjacent half
phonemes), syllables, demi-syllables, or words and whole phrases, or statistically selected
arbitrary pieces of speech.
Typically, a set of cost functions is then used. The unit selection process then picks
units that minimise the overall cost of their selection. A variety of cost calculations
may be used. However, they all measure a concept of 'distance' between a speech unit
and its environment in the database and the ideal unit at the point in the speech for which
the unit is a candidate. The concept of distance includes such things as the units' durations,
pitch, the identity of adjacent units, and the smoothness of the joins to adjacent units
in the resulting synthesis.
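A rough illustration of this cost minimisation, under assumed target_cost and join_cost functions and one candidate list per target position, is the following Python sketch of a Viterbi-style search; it is not a description of any particular engine.

def select_units(candidates, target_cost, join_cost):
    """Pick one unit per position so that the summed target and join costs are
    minimal -- a simple Viterbi-style dynamic programme over the candidate lattice.

    candidates  : list of lists; candidates[t] holds the units usable at position t
    target_cost : target_cost(unit, t) -> cost of using the unit at position t
    join_cost   : join_cost(prev_unit, unit) -> cost of concatenating the two"""
    best = [target_cost(u, 0) for u in candidates[0]]
    back = [[None] * len(candidates[0])]
    for t in range(1, len(candidates)):
        new_best, new_back = [], []
        for u in candidates[t]:
            # Cheapest way of reaching unit u at position t from any unit at t-1.
            costs = [best[p] + join_cost(candidates[t - 1][p], u)
                     for p in range(len(candidates[t - 1]))]
            p_best = min(range(len(costs)), key=costs.__getitem__)
            new_best.append(costs[p_best] + target_cost(u, t))
            new_back.append(p_best)
        best, back = new_best, back + [new_back]
    # Trace back the lowest-cost sequence of candidate indices.
    idx = min(range(len(best)), key=best.__getitem__)
    chosen = [idx]
    for t in range(len(candidates) - 1, 0, -1):
        idx = back[t][idx]
        chosen.append(idx)
    chosen.reverse()
    return [candidates[t][i] for t, i in enumerate(chosen)]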
Generally, the selected units will not match perfectly the required duration and pitch
and have to be adjusted. It may just be that the required 'prosodic' adjustments are so small
that they do not need to be made, there being enough variants of each unit in the database
to satisfy the needs of normal synthesis.
The great advantage of unit selection is that it generates natural sounding voices.
When used with a large well prepared database and sophisticated methods of selecting units,
speaking in situations where few or no prosodic adjustments are required, then
the naturalness can be stunning.
If adjustment is required, naturalness and voice quality can be affected, and it is
more of a problem the greater the adjustment needed. Users of unit selection TTS engines
who want to use speech at fast-talking rates may be a surprised at the reduction in
voice quality that can then arise.
Although concatenative synthesis is not generally computationally intensive, the unit
selection process can be. A huge number of combinations of units may have to be tried
and rejected as too costly before the best are found. The search for lowest cost units
has to be done before an utterance can be spoken. This computation can give rise to
a discernable delay, or latency, before speaking begins. This is especially
true if the voice database is large. For some applications the delay may be
unacceptable. TTS Engine designers employ special techniques to shorten the search with
minimal effect on the speech quality for this reason.
15 | The logarithm of a number is the exponent to which another fixed value, the base, must be raised to produce that number. For example, the logarithm of 1000 to base 10 is 3, because 1000 is 10 to the power 3: 1000 = 10 × 10 × 10 = 103. More generally, if x = by, then y is the logarithm of x to base b, and is written y = logb(x), so log10(1000) = 3.
The logarithm to base b = 10 is called the common logarithm and has many applications in science and engineering. The natural logarithm has the constant e (≈ 2.718) as its base; its use is widespread in pure mathematics, especially calculus. The binary logarithm uses base b = 2 and is prominent in computer science.
Logarithms were introduced by John Napier in the early 17th century as a means to simplify calculations. They were rapidly adopted by navigators, scientists, engineers, and others to perform computations more easily, using slide rules and logarithm tables. Tedious multi-digit multiplication steps can be replaced by table look-ups and simpler addition because of the fact—important in its own right—that the logarithm of a product is the sum of the logarithms of the factors:
logb(x · y) = logb(x) + logb(y).
Logarithmic scales reduce wide-ranging quantities to smaller scopes. For example, the decibel is a logarithmic unit quantifying sound pressure and voltage ratios. In chemistry, pH is a logarithmic measure for the acidity of an aqueous solution. Logarithms are commonplace in scientific formulae, and in measurements of the complexity of algorithms and of geometric objects called fractals. They describe musical intervals, appear in formulae counting prime numbers, inform some models in psychophysics, and can aid in forensic accounting.
In the same way as the logarithm reverses exponentiation, the complex logarithm is the inverse function of the exponential function applied to complex numbers. The discrete logarithm is another variant; it has applications in public-key cryptography.
Motivation and definition
The idea of logarithms is to reverse the operation of exponentiation, that is raising a number to a power. For example, the third power (or cube) of 2 is 8, because 8 is the product of three factors of 2:
2 × 2 × 2 = 8.
It follows that the logarithm of 8 with respect to base 2 is 3, so log2 8 = 3.
The third power of some number b is the product of three factors of b. More generally, raising b to the n-th power, where n is a natural number, is done by multiplying n factors of b. The n-th power of b is written b^n, so that
b^n = b × b × ⋯ × b (with n factors of b).
Exponentiation may be extended to b^y, where b is a positive number and the exponent y is any real number. For example, b^−1 is the reciprocal of b, that is, 1/b. (For further details, including the formula b^(m + n) = b^m · b^n, see exponentiation.)
The logarithm of a number x with respect to base b is the exponent by which b must be raised to yield x. In other words, the logarithm of x to base b is the solution y to the equation
b^y = x.
The logarithm is denoted "logb(x)" (pronounced as "the logarithm of x to base b" or "the base-b logarithm of x"). In the equation y = logb(x), the value y is the answer to the question "To what power must b be raised, in order to yield x?". To define the logarithm, the base b must be a positive real number not equal to 1 and x must be a positive number.[nb 1]
For example, log2(16) = 4, since 2^4 = 2 × 2 × 2 × 2 = 16. Logarithms can also be negative: log2(1/2) = −1, since 2^−1 = 1/2.
A third example: log10(150) is approximately 2.176, which lies between 2 and 3, just as 150 lies between 10^2 = 100 and 10^3 = 1000. Finally, for any base b, logb(b) = 1 and logb(1) = 0, since b^1 = b and b^0 = 1, respectively.
Several important formulas, sometimes called logarithmic identities or log laws, relate logarithms to one another.
Product, quotient, power and root
The logarithm of a product is the sum of the logarithms of the numbers being multiplied; the logarithm of the ratio of two numbers is the difference of the logarithms. The logarithm of the p-th power of a number is p times the logarithm of the number itself; the logarithm of a p-th root is the logarithm of the number divided by p. These identities, with examples:
- product: logb(x y) = logb(x) + logb(y), e.g. log3(243) = log3(9 · 27) = log3(9) + log3(27) = 2 + 3 = 5
- quotient: logb(x / y) = logb(x) − logb(y), e.g. log2(16 / 4) = log2(16) − log2(4) = 4 − 2 = 2
- power: logb(x^p) = p logb(x), e.g. log2(64) = log2(2^6) = 6 log2(2) = 6
- root: logb(x^(1/p)) = logb(x) / p, e.g. log10(√1000) = (1/2) log10(1000) = 1.5
Change of base
The logarithm logb(x) can be computed from the logarithms of x and b with respect to an arbitrary base k using the following formula:
logb(x) = logk(x) / logk(b).
Given a number x and its logarithm logb(x) to an unknown base b, the base is given by:
b = x^(1 / logb(x)).
Among all choices for the base b, three are particularly common. These are b = 10, b = e (the irrational mathematical constant ≈ 2.71828), and b = 2. In mathematical analysis, the logarithm to base e is widespread because of its particular analytical properties explained below. On the other hand, base-10 logarithms are easy to use for manual calculations in the decimal number system:
Thus, log10(x) is related to the number of decimal digits of a positive integer x: the number of digits is the smallest integer strictly bigger than log10(x). For example, log10(1430) is approximately 3.15. The next integer is 4, which is the number of digits of 1430. The logarithm to base two is used in computer science, where the binary system is ubiquitous, and in music theory, where a pitch ratio of two (the octave) is ubiquitous and the cent is the binary logarithm (scaled by 1200) of the ratio between two pitches.
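As a small illustration of the two uses just mentioned (counting decimal digits with the common logarithm and measuring pitch intervals with the binary logarithm), the following Python lines may help; the function names are ours.

import math

def decimal_digits(x):
    """Number of decimal digits of a positive integer: the smallest integer
    strictly bigger than log10(x)."""
    return math.floor(math.log10(x)) + 1

def interval_in_cents(f1, f2):
    """Size of the interval between two frequencies, in cents (1200 times log2 of the ratio)."""
    return 1200 * math.log2(f2 / f1)

print(decimal_digits(1430))                 # 4
print(round(interval_in_cents(440, 880)))   # 1200, i.e. one octave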
The following table lists common notations for logarithms to these bases and the fields where they are used. Many disciplines write log(x) instead of logb(x), when the intended base can be determined from the context. The notation blog(x) also occurs. The "ISO notation" column lists designations suggested by the International Organization for Standardization (ISO 31-11).
|Base b||Name for logb(x)||ISO notation||Other notations||Used in|
|2||binary logarithm||lb(x)||ld(x), log(x), lg(x)||computer science, information theory, mathematics, music theory|
|e||natural logarithm||ln(x)[nb 2]||log(x) (in mathematics and many programming languages[nb 3]), loge(x)||mathematical analysis, physics, chemistry, statistics, economics, and some engineering fields|
|10||common logarithm||lg(x)||log(x) (in engineering, biology, astronomy), log10(x)||various engineering fields (see decibel and see below), logarithm tables, handheld calculators, spectroscopy|
The Babylonians sometime in 2000–1600 BC may have invented the quarter square multiplication algorithm to multiply two numbers using only addition, subtraction and a table of squares. However it could not be used for division without an additional table of reciprocals. Large tables of quarter squares were used to simplify the accurate multiplication of large numbers from 1817 onwards until this was superseded by the use of computers.
The Indian mathematician Virasena worked with the concept of ardhaccheda: the number of times a number of the form 2n could be halved. For exact powers of 2, this is the logarithm to that base, which is a whole number; for other numbers, it is undefined. He described relations such as the product formula and also introduced integer logarithms in base 3 (trakacheda) and base 4 (caturthacheda)
In the 16th and early 17th centuries an algorithm called prosthaphaeresis was used to approximate multiplication and division. This used the trigonometric identity
cos(α) cos(β) = (1/2) (cos(α − β) + cos(α + β))
or similar to convert the multiplications to additions and table lookups. However logarithms are more straightforward and require less work. It can be shown using complex numbers that this is basically the same technique.
From Napier to Euler
The method of logarithms was publicly propounded by John Napier in 1614, in a book titled Mirifici Logarithmorum Canonis Descriptio (Description of the Wonderful Rule of Logarithms). Joost Bürgi independently invented logarithms but published six years after Napier.
...the accent in calculation led Justus Byrgius [Joost Bürgi] on the way to these very logarithms many years before Napier's system appeared; but ...instead of rearing up his child for the public benefit he deserted it in the birth.—Johannes Kepler, Rudolphine Tables (1627)
By repeated subtractions Napier calculated (1 − 10^−7)^L for L ranging from 1 to 100. The result for L = 100 is approximately 0.99999 = 1 − 10^−5. Napier then calculated the products of these numbers with 10^7 (1 − 10^−5)^L for L from 1 to 50, and did similarly with 0.9998 ≈ (1 − 10^−5)^20 and 0.9 ≈ 0.995^20. These computations, which occupied 20 years, allowed him to give, for any number N from 5 to 10 million, the number L that solves the equation
N = 10^7 (1 − 10^−7)^L.
Napier first called L an "artificial number", but later introduced the word "logarithm" to mean a number that indicates a ratio: λόγος (logos) meaning proportion, and ἀριθμός (arithmos) meaning number. In modern notation, the relation to natural logarithms is:
L = log_(1 − 10^−7)(N / 10^7) ≈ −10^7 ln(N / 10^7) = 10^7 ln(10^7 / N),
where the very close approximation corresponds to the observation that
(1 − 10^−7)^(10^7) ≈ 1/e.
The invention was quickly and widely met with acclaim. The works of Bonaventura Cavalieri (Italy), Edmund Wingate (France), Xue Fengzuo (China), and Johannes Kepler's Chilias logarithmorum (Germany) helped spread the concept further.
In 1647 Grégoire de Saint-Vincent related logarithms to the quadrature of the hyperbola, by pointing out that the area f(t) under the hyperbola from x = 1 to x = t satisfies
f(tu) = f(t) + f(u).
The natural logarithm was first described by Nicholas Mercator in his work Logarithmotechnia published in 1668, although the mathematics teacher John Speidell had already in 1619 compiled a table on the natural logarithm. Around 1730, Leonhard Euler defined the exponential function and the natural logarithm by
e^x = lim(n→∞) (1 + x/n)^n,  ln(x) = lim(n→∞) n (x^(1/n) − 1).
Logarithm tables, slide rules, and historical applications
By simplifying difficult calculations, logarithms contributed to the advance of science, and especially of astronomy. They were critical to advances in surveying, celestial navigation, and other domains. Pierre-Simon Laplace called logarithms
- "...[a]n admirable artifice which, by reducing to a few days the labour of many months, doubles the life of the astronomer, and spares him the errors and disgust inseparable from long calculations."
A key tool that enabled the practical use of logarithms before calculators and computers was the table of logarithms. The first such table was compiled by Henry Briggs in 1617, immediately after Napier's invention. Subsequently, tables with increasing scope and precision were written. These tables listed the values of logb(x) and b^x for any number x in a certain range, at a certain precision, for a certain base b (usually b = 10). For example, Briggs' first table contained the common logarithms of all integers in the range 1–1000, with a precision of 8 digits. As the function f(x) = b^x is the inverse function of logb(x), it has been called the antilogarithm. The product and quotient of two positive numbers c and d were routinely calculated as the sum and difference of their logarithms. The product cd or quotient c/d came from looking up the antilogarithm of the sum or difference, also via the same table:
c · d = b^(logb(c) + logb(d)) and c / d = b^(logb(c) − logb(d)).
For manual calculations that demand any appreciable precision, performing the lookups of the two logarithms, calculating their sum or difference, and looking up the antilogarithm is much faster than performing the multiplication by earlier methods such as prosthaphaeresis, which relies on trigonometric identities. Calculations of powers and roots are reduced to multiplications or divisions and look-ups by
c^d = b^(d · logb(c)) and the d-th root of c = b^(logb(c) / d).
Many logarithm tables give logarithms by separately providing the characteristic and mantissa of x, that is to say, the integer part and the fractional part of log10(x). The characteristic of 10 · x is one plus the characteristic of x, and their significands are the same. This extends the scope of logarithm tables: given a table listing log10(x) for all integers x ranging from 1 to 1000, the logarithm of 3542 is approximated by
log10(3542) = log10(10 · 354.2) = 1 + log10(354.2) ≈ 1 + log10(354).
Another critical application was the slide rule, a pair of logarithmically divided scales used for calculation, as illustrated here:
The non-sliding logarithmic scale, Gunter's rule, was invented shortly after Napier's invention. William Oughtred enhanced it to create the slide rule—a pair of logarithmic scales movable with respect to each other. Numbers are placed on sliding scales at distances proportional to the differences between their logarithms. Sliding the upper scale appropriately amounts to mechanically adding logarithms. For example, adding the distance from 1 to 2 on the lower scale to the distance from 1 to 3 on the upper scale yields a product of 6, which is read off at the lower part. The slide rule was an essential calculating tool for engineers and scientists until the 1970s, because it allows, at the expense of precision, much faster computation than techniques based on tables.
A deeper study of logarithms requires the concept of a function. A function is a rule that, given one number, produces another number. An example is the function producing the x-th power of b from any real number x, where the base b is a fixed number. This function is written
f(x) = b^x.
To justify the definition of logarithms, it is necessary to show that the equation
b^x = y
has a solution x and that this solution is unique, provided that y is positive and that b is positive and unequal to 1. A proof of that fact requires the intermediate value theorem from elementary calculus. This theorem states that a continuous function that produces two values m and n also produces any value that lies between m and n. A function is continuous if it does not "jump", that is, if its graph can be drawn without lifting the pen.
This property can be shown to hold for the function f(x) = bx. Because f takes arbitrarily large and arbitrarily small positive values, any number y > 0 lies between f(x0) and f(x1) for suitable x0 and x1. Hence, the intermediate value theorem ensures that the equation f(x) = y has a solution. Moreover, there is only one solution to this equation, because the function f is strictly increasing (for b > 1), or strictly decreasing (for 0 < b < 1).
The unique solution x is the logarithm of y to base b, logb(y). The function that assigns to y its logarithm is called logarithm function or logarithmic function (or just logarithm).
The formula for the logarithm of a power says in particular that for any number x,
logb(b^x) = x.
In prose, taking the x-th power of b and then the base-b logarithm gives back x. Conversely, given a positive number y, the formula
b^(logb(y)) = y
says that first taking the logarithm and then exponentiating gives back y. Thus, the two possible ways of combining (or composing) logarithms and exponentiation give back the original number. Therefore, the logarithm to base b is the inverse function of f(x) = bx.
Inverse functions are closely related to the original functions. Their graphs correspond to each other upon exchanging the x- and the y-coordinates (or upon reflection at the diagonal line x = y), as shown at the right: a point (t, u = bt) on the graph of f yields a point (u, t = logbu) on the graph of the logarithm and vice versa. As a consequence, logb(x) diverges to infinity (gets bigger than any given number) if x grows to infinity, provided that b is greater than one. In that case, logb(x) is an increasing function. For b < 1, logb(x) tends to minus infinity instead. When x approaches zero, logb(x) goes to minus infinity for b > 1 (plus infinity for b < 1, respectively).
Derivative and antiderivative
Analytic properties of functions pass to their inverses. Thus, as f(x) = b^x is a continuous and differentiable function, so is logb(y). Roughly, a continuous function is differentiable if its graph has no sharp "corners". Moreover, as the derivative of f(x) evaluates to ln(b) b^x by the properties of the exponential function, the chain rule implies that the derivative of logb(x) is given by
d/dx logb(x) = 1 / (x ln(b)).
That is, the slope of the tangent touching the graph of the base-b logarithm at the point (x, logb(x)) equals 1/(x ln(b)). In particular, the derivative of ln(x) is 1/x, which implies that the antiderivative of 1/x is ln(x) + C. The derivative with a generalised functional argument f(x) is
d/dx ln(f(x)) = f'(x) / f(x).
The quotient at the right hand side is called the logarithmic derivative of f. Computing f'(x) by means of the derivative of ln(f(x)) is known as logarithmic differentiation. The antiderivative of the natural logarithm ln(x) is:
∫ ln(x) dx = x ln(x) − x + C.
Integral representation of the natural logarithm
The natural logarithm of t agrees with the integral of 1/x dx from 1 to t:
ln(t) = ∫_1^t (1/x) dx.
In other words, ln(t) equals the area between the x axis and the graph of the function 1/x, ranging from x = 1 to x = t. This is a consequence of the fundamental theorem of calculus and the fact that the derivative of ln(x) is 1/x. The right hand side of this equation can serve as a definition of the natural logarithm. Product and power logarithm formulas can be derived from this definition. For example, the product formula ln(tu) = ln(t) + ln(u) is deduced as:
ln(tu) = ∫_1^(tu) (1/x) dx  =(1)  ∫_1^t (1/x) dx + ∫_t^(tu) (1/x) dx  =(2)  ln(t) + ∫_1^u (1/w) dw = ln(t) + ln(u).
The equality (1) splits the integral into two parts, while the equality (2) is a change of variable (w = x/t). In the illustration below, the splitting corresponds to dividing the area into the yellow and blue parts. Rescaling the left hand blue area vertically by the factor t and shrinking it by the same factor horizontally does not change its size. Moving it appropriately, the area fits the graph of the function f(x) = 1/x again. Therefore, the left hand blue area, which is the integral of f(x) from t to tu is the same as the integral from 1 to u. This justifies the equality (2) with a more geometric proof.
The power formula ln(t^r) = r ln(t) may be derived in a similar way:
ln(t^r) = ∫_1^(t^r) (1/x) dx = ∫_1^t (1/w^r) (r w^(r−1)) dw = r ∫_1^t (1/w) dw = r ln(t).
The second equality uses the change of variables (integration by substitution) w = x^(1/r).
The sum over the reciprocals of the natural numbers, 1 + 1/2 + 1/3 + ⋯ + 1/n, is known as the harmonic series. It is closely tied to the natural logarithm: as n tends to infinity, the difference between the sum and ln(n) converges to the Euler–Mascheroni constant γ ≈ 0.5772.
There is also another integral representation of the logarithm that is useful in some situations:
ln(x) = ∫_0^∞ ((e^(−t) − e^(−xt)) / t) dt.
This can be verified by showing that it has the same value at x = 1, and the same derivative.
Transcendence of the logarithm
The logarithm is an example of a transcendental function and from a theoretical point of view, the Gelfond–Schneider theorem asserts that logarithms usually take "difficult" values. The formal statement relies on the notion of algebraic numbers, which includes all rational numbers, but also irrational numbers such as the square root of 2.
Complex numbers that are not algebraic are called transcendental; for example, π and e are such numbers. Almost all complex numbers are transcendental. Using these notions, the Gelfond–Schneider theorem states that given two algebraic numbers a and b, logb(a) is either a transcendental number or a rational number p / q (in which case aq = bp, so a and b were closely related to begin with).
Logarithms are easy to compute in some cases, such as log10(1,000) = 3. In general, logarithms can be calculated using power series or the arithmetic-geometric mean, or be retrieved from a precalculated logarithm table that provides a fixed precision. Newton's method, an iterative method to solve equations approximately, can also be used to calculate the logarithm, because its inverse function, the exponential function, can be computed efficiently. Using look-up tables, CORDIC-like methods can be used to compute logarithms if the only available operations are addition and bit shifts. Moreover, the binary logarithm algorithm calculates lb(x) recursively based on repeated squarings of x, taking advantage of the relation
log2(x^2) = 2 log2(x).
- Taylor series
For any real number z with 0 < z ≤ 2, the natural logarithm can be written as the series
ln(z) = (z − 1) − (z − 1)^2/2 + (z − 1)^3/3 − (z − 1)^4/4 + ⋯
This is a shorthand for saying that ln(z) can be approximated to a more and more accurate value by the following expressions:
(z − 1), (z − 1) − (z − 1)^2/2, (z − 1) − (z − 1)^2/2 + (z − 1)^3/3, and so on.
For example, with z = 1.5 the third approximation yields 0.4167, which is about 0.011 greater than ln(1.5) = 0.405465. This series approximates ln(z) with arbitrary precision, provided the number of summands is large enough. In elementary calculus, ln(z) is therefore the limit of this series. It is the Taylor series of the natural logarithm at z = 1. The Taylor series of ln z provides a particularly useful approximation to ln(1+z) when z is small, |z| << 1, since then
ln(1 + z) ≈ z − z^2/2 ≈ z.
For example, with z = 0.1 the first-order approximation gives ln(1.1) ≈ 0.1, which is less than 5% off the correct value 0.0953.
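A minimal Python sketch of these partial sums, for comparison against math.log, is shown below (the function name is illustrative).

import math

def ln_taylor(z, terms):
    """Partial sum of (z-1) - (z-1)^2/2 + (z-1)^3/3 - ..., which converges to ln(z) for 0 < z <= 2."""
    return sum((-1) ** (n + 1) * (z - 1) ** n / n for n in range(1, terms + 1))

print(ln_taylor(1.5, 3), math.log(1.5))   # about 0.4167 versus 0.405465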
- More efficient series
Another series is based on the area hyperbolic tangent function:
ln(z) = 2 artanh((z − 1)/(z + 1)) = 2 ((z − 1)/(z + 1) + (1/3)((z − 1)/(z + 1))^3 + (1/5)((z − 1)/(z + 1))^5 + ⋯)
This series can be derived from the above Taylor series. It converges more quickly than the Taylor series, especially if z is close to 1. For example, for z = 1.5, the first three terms of the second series approximate ln(1.5) with an error of about 3 × 10^−6. The quick convergence for z close to 1 can be taken advantage of in the following way: given a low-accuracy approximation y ≈ ln(z) and putting
A = z / exp(y),
the logarithm of z is:
ln(z) = y + ln(A).
The better the initial approximation y is, the closer A is to 1, so its logarithm can be calculated efficiently. A can be calculated using the exponential series, which converges quickly provided y is not too large. Calculating the logarithm of larger z can be reduced to smaller values of z by writing z = a · 10b, so that ln(z) = ln(a) + b · ln(10).
A closely related method can be used to compute the logarithm of integers. From the above series, it follows that:
ln(n + 1) = ln(n) + 2 artanh(1/(2n + 1)) = ln(n) + 2 (1/(2n + 1) + 1/(3 (2n + 1)^3) + ⋯).
If the logarithm of a large integer n is known, then this series yields a fast converging series for log(n+1).
Arithmetic-geometric mean approximation
The arithmetic-geometric mean yields high precision approximations of the natural logarithm. ln(x) is approximated to a precision of 2^−p (or p precise bits) by the following formula (due to Carl Friedrich Gauss):
ln(x) ≈ π / (2 M(1, 2^(2 − m)/x)) − m ln(2).
Here M denotes the arithmetic-geometric mean. It is obtained by repeatedly calculating the average (arithmetic mean) and the square root of the product of two numbers (geometric mean). Moreover, m is chosen such that
x · 2^m > 2^(p/2).
Both the arithmetic-geometric mean and the constants π and ln(2) can be calculated with quickly converging series.
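The following Python sketch implements the Gauss formula quoted above; the fixed iteration count for the arithmetic-geometric mean and the way m is chosen are our assumptions, so it should be read as an illustration rather than a reference implementation.

import math

def agm(a, b, iterations=40):
    """Arithmetic-geometric mean of a and b (a fixed number of iterations is
    more than enough at double precision)."""
    for _ in range(iterations):
        a, b = (a + b) / 2.0, math.sqrt(a * b)
    return a

def ln_agm(x, p=50):
    """Approximate ln(x) to roughly p bits via ln(x) ~ pi / (2 M(1, 2^(2-m)/x)) - m ln(2),
    with m chosen so that x * 2^m exceeds 2^(p/2)."""
    m = max(0, math.ceil(p / 2 - math.log2(x)))
    return math.pi / (2 * agm(1.0, 2.0 ** (2 - m) / x)) - m * math.log(2)

print(ln_agm(10.0), math.log(10.0))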
Logarithms have many applications inside and outside mathematics. Some of these occurrences are related to the notion of scale invariance. For example, each chamber of the shell of a nautilus is an approximate copy of the next one, scaled by a constant factor. This gives rise to a logarithmic spiral. Benford's law on the distribution of leading digits can also be explained by scale invariance. Logarithms are also linked to self-similarity. For example, logarithms appear in the analysis of algorithms that solve a problem by dividing it into two similar smaller problems and patching their solutions. The dimensions of self-similar geometric shapes, that is, shapes whose parts resemble the overall picture, are also based on logarithms. Logarithmic scales are useful for quantifying the relative change of a value as opposed to its absolute difference. Moreover, because the logarithmic function log(x) grows very slowly for large x, logarithmic scales are used to compress large-scale scientific data. Logarithms also occur in numerous scientific formulas, such as the Tsiolkovsky rocket equation, the Fenske equation, or the Nernst equation.
Scientific quantities are often expressed as logarithms of other quantities, using a logarithmic scale. For example, the decibel is a logarithmic unit of measurement. It is based on the common logarithm of ratios—10 times the common logarithm of a power ratio or 20 times the common logarithm of a voltage ratio. It is used to quantify the loss of voltage levels in transmitting electrical signals, to describe power levels of sounds in acoustics, and the absorbance of light in the fields of spectrometry and optics. The signal-to-noise ratio describing the amount of unwanted noise in relation to a (meaningful) signal is also measured in decibels. In a similar vein, the peak signal-to-noise ratio is commonly used to assess the quality of sound and image compression methods using the logarithm.
The strength of an earthquake is measured by taking the common logarithm of the energy emitted at the quake. This is used in the moment magnitude scale or the Richter scale. For example, a 5.0 earthquake releases about 32 times and a 6.0 about 1,000 times the energy of a 4.0. Another logarithmic scale is apparent magnitude. It measures the brightness of stars logarithmically. Yet another example is pH in chemistry; pH is the negative of the common logarithm of the activity of hydronium ions (the form hydrogen ions H+
take in water). The activity of hydronium ions in neutral water is 10^−7 mol·L^−1, hence a pH of 7. Vinegar typically has a pH of about 3. The difference of 4 corresponds to a ratio of 10^4 in the activity, that is, vinegar's hydronium ion activity is about 10^−3 mol·L^−1.
Semilog (log-linear) graphs use the logarithmic scale concept for visualization: one axis, typically the vertical one, is scaled logarithmically. For example, the chart at the right compresses the steep increase from 1 million to 1 trillion to the same space (on the vertical axis) as the increase from 1 to 1 million. In such graphs, exponential functions of the form f(x) = a · b^x appear as straight lines with slope equal to the logarithm of b. Log-log graphs scale both axes logarithmically, which causes functions of the form f(x) = a · x^k to be depicted as straight lines with slope equal to the exponent k. This is applied in visualizing and analyzing power laws.
Logarithms occur in several laws describing human perception: Hick's law proposes a logarithmic relation between the time individuals take for choosing an alternative and the number of choices they have. Fitts's law predicts that the time required to rapidly move to a target area is a logarithmic function of the distance to and the size of the target. In psychophysics, the Weber–Fechner law proposes a logarithmic relationship between stimulus and sensation such as the actual vs. the perceived weight of an item a person is carrying. (This "law", however, is less precise than more recent models, such as the Stevens' power law.)
Psychological studies found that individuals with little mathematics education tend to estimate quantities logarithmically, that is, they position a number on an unmarked line according to its logarithm, so that 10 is positioned as close to 100 as 100 is to 1000. Increasing education shifts this to a linear estimate (positioning 1000 ten times as far away as 100) in some circumstances, while logarithms are used when the numbers to be plotted are difficult to plot linearly.
Probability theory and statistics
Logarithms arise in probability theory: the law of large numbers dictates that, for a fair coin, as the number of coin-tosses increases to infinity, the observed proportion of heads approaches one-half. The fluctuations of this proportion about one-half are described by the law of the iterated logarithm.
Logarithms also occur in log-normal distributions. When the logarithm of a random variable has a normal distribution, the variable is said to have a log-normal distribution. Log-normal distributions are encountered in many fields, wherever a variable is formed as the product of many independent positive random variables, for example in the study of turbulence.
Logarithms are used for maximum-likelihood estimation of parametric statistical models. For such a model, the likelihood function depends on at least one parameter that must be estimated. A maximum of the likelihood function occurs at the same parameter-value as a maximum of the logarithm of the likelihood (the "log likelihood"), because the logarithm is an increasing function. The log-likelihood is easier to maximize, especially for the multiplied likelihoods for independent random variables.
Benford's law describes the occurrence of digits in many data sets, such as heights of buildings. According to Benford's law, the probability that the first decimal-digit of an item in the data sample is d (from 1 to 9) equals log10(d + 1) − log10(d), regardless of the unit of measurement. Thus, about 30% of the data can be expected to have 1 as first digit, 18% start with 2, etc. Auditors examine deviations from Benford's law to detect fraudulent accounting.
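The predicted first-digit frequencies are easy to tabulate; the short Python snippet below is illustrative.

import math

# First-digit probabilities predicted by Benford's law.
benford = {d: math.log10(d + 1) - math.log10(d) for d in range(1, 10)}
for d, p in benford.items():
    print(d, round(100 * p, 1))   # 1: 30.1, 2: 17.6, ..., 9: 4.6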
Analysis of algorithms is a branch of computer science that studies the performance of algorithms (computer programs solving a certain problem). Logarithms are valuable for describing algorithms that divide a problem into smaller ones, and join the solutions of the subproblems.
For example, to find a number in a sorted list, the binary search algorithm checks the middle entry and proceeds with the half before or after the middle entry if the number is still not found. This algorithm requires, on average, log2(N) comparisons, where N is the list's length. Similarly, the merge sort algorithm sorts an unsorted list by dividing the list into halves and sorting these first before merging the results. Merge sort algorithms typically require a time approximately proportional to N · log(N). The base of the logarithm is not specified here, because the result only changes by a constant factor when another base is used. A constant factor is usually disregarded in the analysis of algorithms under the standard uniform cost model.
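A small Python sketch (function name ours) that counts the middle-entry comparisons made by binary search illustrates the log2(N) behaviour.

def binary_search(sorted_list, target):
    """Return (index or None, number of middle-entry comparisons made)."""
    lo, hi, comparisons = 0, len(sorted_list) - 1, 0
    while lo <= hi:
        mid = (lo + hi) // 2
        comparisons += 1
        if sorted_list[mid] == target:
            return mid, comparisons
        if sorted_list[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return None, comparisons

data = list(range(1_000_000))
print(binary_search(data, 765432)[1])   # close to log2(1_000_000), i.e. about 20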
A function f(x) is said to grow logarithmically if f(x) is (exactly or approximately) proportional to the logarithm of x. (Biological descriptions of organism growth, however, use this term for an exponential function.) For example, any natural number N can be represented in binary form in no more than log2(N) + 1 bits. In other words, the amount of memory needed to store N grows logarithmically with N.
Entropy and chaos
Entropy is broadly a measure of the disorder of a system. In statistical thermodynamics, the entropy S of a physical system is defined as
S = −k Σ_i p_i ln(p_i).
The sum is over all possible states i of the system in question, such as the positions of gas particles in a container. Moreover, p_i is the probability that the state i is attained and k is the Boltzmann constant. Similarly, entropy in information theory measures the quantity of information. If a message recipient may expect any one of N possible messages with equal likelihood, then the amount of information conveyed by any one such message is quantified as log2(N) bits.
Lyapunov exponents use logarithms to gauge the degree of chaoticity of a dynamical system. For example, for a particle moving on an oval billiard table, even small changes of the initial conditions result in very different paths of the particle. Such systems are chaotic in a deterministic way, because small measurement errors of the initial state predictably lead to largely different final states. At least one Lyapunov exponent of a deterministically chaotic system is positive.
Logarithms occur in definitions of the dimension of fractals. Fractals are geometric objects that are self-similar: small parts reproduce, at least roughly, the entire global structure. The Sierpinski triangle (pictured) can be covered by three copies of itself, each having sides half the original length. This makes the Hausdorff dimension of this structure log(3)/log(2) ≈ 1.58. Another logarithm-based notion of dimension is obtained by counting the number of boxes needed to cover the fractal in question.
Logarithms are related to musical tones and intervals. In equal temperament, the frequency ratio depends only on the interval between two tones, not on the specific frequency, or pitch, of the individual tones. For example, the note A has a frequency of 440 Hz and B-flat has a frequency of 466 Hz. The interval between A and B-flat is a semitone, as is the one between B-flat and B (frequency 493 Hz). Accordingly, the frequency ratios agree:
466/440 ≈ 493/466 ≈ 1.059 ≈ 2^(1/12).
Therefore, logarithms can be used to describe the intervals: an interval is measured in semitones by taking the base-2^(1/12) logarithm of the frequency ratio, while the base-2^(1/1200) logarithm of the frequency ratio expresses the interval in cents, hundredths of a semitone. The latter is used for finer encoding, as it is needed for non-equal temperaments.
For example, the 1/12 tone, the semitone, the just major third, the major third, the tritone, and the octave (with the two tones played at the same time) can each be described by their frequency ratio r, the corresponding number of semitones, 12 · log2(r), and the corresponding number of cents, 1200 · log2(r).
Natural logarithms are closely linked to counting prime numbers (2, 3, 5, 7, 11, ...), an important topic in number theory. For any integer x, the quantity of prime numbers less than or equal to x is denoted π(x). The prime number theorem asserts that π(x) is approximately given by
x / ln(x),
in the sense that the ratio of π(x) and that fraction approaches 1 when x tends to infinity. As a consequence, the probability that a randomly chosen number between 1 and x is prime is inversely proportional to the number of decimal digits of x. A far better estimate of π(x) is given by the offset logarithmic integral function Li(x), defined by
Li(x) = ∫_2^x (1 / ln(t)) dt.
The Riemann hypothesis, one of the oldest open mathematical conjectures, can be stated in terms of comparing π(x) and Li(x). The Erdős–Kac theorem describing the number of distinct prime factors also involves the natural logarithm.
The logarithm of n factorial, n! = 1 · 2 · ... · n, is given by
ln(n!) = ln(1) + ln(2) + ⋯ + ln(n).
The complex numbers a solving the equation
e^a = z
are called complex logarithms. Here, z is a complex number. A complex number is commonly represented as z = x + iy, where x and y are real numbers and i is the imaginary unit. Such a number can be visualized by a point in the complex plane, as shown at the right. The polar form encodes a non-zero complex number z by its absolute value, that is, the distance r to the origin, and an angle between the x axis and the line passing through the origin and z. This angle is called the argument of z. The absolute value r of z is
r = √(x^2 + y^2).
The argument is not uniquely specified by z: both φ and φ' = φ + 2π are arguments of z because adding 2π radians or 360 degrees[nb 6] to φ corresponds to "winding" around the origin counter-clockwise by a turn. The resulting complex number is again z, as illustrated at the right. However, exactly one argument φ satisfies −π < φ and φ ≤ π. It is called the principal argument, denoted Arg(z), with a capital A. (An alternative normalization is 0 ≤ Arg(z) < 2π.)
Using Euler's formula, z can be written as z = r(cos φ + i sin φ) = r·e^(iφ). This implies that the a-th power of e equals z exactly when a = ln(r) + i(φ + 2nπ), where
φ is the principal argument Arg(z) and n is an arbitrary integer. Any such a is called a complex logarithm of z. There are infinitely many of them, in contrast to the uniquely defined real logarithm. If n = 0, a is called the principal value of the logarithm, denoted Log(z). The principal argument of any positive real number x is 0; hence Log(x) is a real number and equals the real (natural) logarithm. However, the above formulas for logarithms of products and powers do not generalize to the principal value of the complex logarithm.
The illustration at the right depicts Log(z). The discontinuity, that is, the jump in the hue at the negative part of the x- or real axis, is caused by the jump of the principal argument there. This locus is called a branch cut. This behavior can only be circumvented by dropping the range restriction on φ. Then the argument of z and, consequently, its logarithm become multi-valued functions.
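A brief sketch using Python's cmath module (the sample value z = −1 + i is my own choice): cmath.log returns the principal value Log(z), whose imaginary part is Arg(z), and adding integer multiples of 2πi yields the other complex logarithms of z:

    import cmath, math

    z = -1 + 1j
    principal = cmath.log(z)                 # Log(z) = ln|z| + i*Arg(z)
    print(principal)                         # ≈ 0.3466 + 2.3562j (Arg(z) = 3*pi/4)
    print(math.log(abs(z)), cmath.phase(z))  # same real part and same argument

    # Every a = Log(z) + 2*pi*i*n is also a logarithm of z: exp(a) recovers z.
    for n in (-1, 0, 1):
        print(cmath.exp(principal + 2j * math.pi * n))   # each output is ≈ -1+1j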
Inverses of other exponential functions
Exponentiation occurs in many areas of mathematics and its inverse function is often referred to as the logarithm. For example, the logarithm of a matrix is the (multi-valued) inverse function of the matrix exponential. Another example is the p-adic logarithm, the inverse function of the p-adic exponential. Both are defined via Taylor series analogous to the real case. In the context of differential geometry, the exponential map maps the tangent space at a point of a manifold to a neighborhood of that point. Its inverse is also called the logarithmic (or log) map.
The discrete logarithm is a related notion in the theory of finite groups: it is the integer n solving the equation b^n = x, where b and x are elements of the group. Carrying out the exponentiation can be done efficiently, but the discrete logarithm is believed to be very hard to calculate in some groups. This asymmetry has important applications in public key cryptography, such as for example in the Diffie–Hellman key exchange, a routine that allows secure exchanges of cryptographic keys over unsecured information channels. Zech's logarithm is related to the discrete logarithm in the multiplicative group of non-zero elements of a finite field.
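As a toy sketch (the prime, the base and the exponent are illustrative values only), exponentiation modulo a prime is fast via pow, while recovering the exponent — the discrete logarithm — is done here by brute force, which is feasible only because the group is tiny:

    def discrete_log(base, target, p):
        """Smallest n >= 0 with base**n congruent to target modulo p, by exhaustive search."""
        value = 1
        for n in range(p - 1):
            if value == target:
                return n
            value = (value * base) % p
        return None   # target is not a power of base modulo p

    p, g = 101, 2            # a small prime and a base (illustrative values)
    x = pow(g, 57, p)        # fast modular exponentiation
    print(x)                 # an element of the multiplicative group mod p
    print(discrete_log(g, x, p))   # recovers 57 — but only because p is tiny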
Further logarithm-like inverse functions include the double logarithm ln(ln(x)), the super- or hyper-4-logarithm (a slight variation of which is called iterated logarithm in computer science), the Lambert W function, and the logit. They are the inverse functions of the double exponential function, tetration, of f(w) = we^w, and of the logistic function, respectively.
From the perspective of pure mathematics, the identity log(cd) = log(c) + log(d) expresses a group isomorphism between positive reals under multiplication and reals under addition. Logarithmic functions are the only continuous isomorphisms between these groups. By means of that isomorphism, the Haar measure (Lebesgue measure) dx on the reals corresponds to the Haar measure dx/x on the positive reals. In complex analysis and algebraic geometry, differential forms of the form df/f are known as forms with logarithmic poles.
The polylogarithm is the function defined by Li_s(z) = z/1^s + z²/2^s + z³/3^s + ⋯. It is related to the natural logarithm by Li_1(z) = −ln(1 − z), and Li_s(1) equals the Riemann zeta function ζ(s).
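A short sketch (my own illustration) evaluating the defining series for |z| < 1 and checking the special case Li_1(z) = −ln(1 − z):

    import math

    def polylog(s, z, terms=10_000):
        """Partial sum of the series Li_s(z) = sum over k >= 1 of z**k / k**s (|z| < 1)."""
        return sum(z ** k / k ** s for k in range(1, terms + 1))

    z = 0.5
    print(polylog(1, z), -math.log(1 - z))   # both ≈ 0.6931 = ln 2
    print(polylog(2, z))                     # dilogarithm Li_2(1/2) ≈ 0.5822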
- The restrictions on x and b are explained in the section "Analytic properties".
- Some mathematicians disapprove of this notation. In his 1985 autobiography, Paul Halmos criticized what he considered the "childish ln notation," which he said no mathematician had ever used. The notation was invented by Irving Stringham, a mathematician.
- For example C, Java, Haskell, and BASIC.
- The same series holds for the principal value of the complex logarithm for complex numbers z satisfying |z − 1| < 1.
- The same series holds for the principal value of the complex logarithm for complex numbers z with positive real part.
- See radian for the conversion between 2π and 360 degrees.
- Shirali, Shailesh (2002), A Primer on Logarithms, Hyderabad: Universities Press, ISBN 978-81-7371-414-6, esp. section 2
- Kate, S.K.; Bhapkar, H.R. (2009), Basics Of Mathematics, Pune: Technical Publications, ISBN 978-81-8431-755-8, chapter 1
- All statements in this section can be found in Shailesh Shirali 2002, section 4, (Douglas Downing 2003, p. 275), or Kate & Bhapkar 2009, p. 1-1, for example.
- Bernstein, Stephen; Bernstein, Ruth (1999), Schaum's outline of theory and problems of elements of statistics. I, Descriptive statistics and probability, Schaum's outline series, New York: McGraw-Hill, ISBN 978-0-07-005023-5, p. 21
- Downing, Douglas (2003), Algebra the Easy Way, Barron's Educational Series, Hauppauge, N.Y.: Barron's, ISBN 978-0-7641-1972-9, chapter 17, p. 275
- Wegener, Ingo (2005), Complexity theory: exploring the limits of efficient algorithms, Berlin, New York: Springer-Verlag, ISBN 978-3-540-21045-0, p. 20
- Franz Embacher; Petra Oberhuemer, Mathematisches Lexikon (in German), mathe online: für Schule, Fachhochschule, Universität und Selbststudium, retrieved 22/03/2011
- B. N. Taylor (1995), Guide for the Use of the International System of Units (SI), US Department of Commerce
- Gullberg, Jan (1997), Mathematics: from the birth of numbers., New York: W. W. Norton & Co, ISBN 978-0-393-04002-9
- Paul Halmos (1985), I Want to Be a Mathematician: An Automathography, Berlin, New York: Springer-Verlag, ISBN 978-0-387-96078-4
- Irving Stringham (1893), Uniplanar algebra: being part I of a propædeutic to the higher mathematical analysis, The Berkeley Press, p. xiii
- Roy S. Freedman (2006), Introduction to Financial Technology, Amsterdam: Academic Press, p. 59, ISBN 978-0-12-370478-8
- McFarland, David (2007), Quarter Tables Revisited: Earlier Tables, Division of Labor in Table Construction, and Later Implementations in Analog Computers, p. 1
- Robson, Eleanor (2008). Mathematics in Ancient Iraq: A Social History. p. 227. ISBN 978-0691091822.
- Gupta, R. C. (2000), "History of Mathematics in India", in Hoiberg, Dale; Ramchandani, Indu, Students' Britannica India: Select essays, Popular Prakashan, p. 329
- Stifelio, Michaele (1544), Arithmetica Integra, London: Iohan Petreium
- Bukhshtab, A.A.; Pechaev, V.I. (2001), "Arithmetic", in Hazewinkel, Michiel, Encyclopedia of Mathematics, Springer, ISBN 978-1-55608-010-4
- Vivian Shaw Groza and Susanne M. Shelley (1972), Precalculus mathematics, New York: Holt, Rinehart and Winston, p. 182, ISBN 978-0-03-077670-0
- Ernest William Hobson (1914), John Napier and the invention of logarithms, 1614, Cambridge: The University Press
- Boyer 1991, Chapter 14, section "Jobst Bürgi"
- Gladstone-Millar, Lynne (2003), John Napier: Logarithm John, National Museums Of Scotland, ISBN 978-1-901663-70-9, p. 44
- Napier, Mark (1834), Memoirs of John Napier of Merchiston, Edinburgh: William Blackwood, p. 392.
- William Harrison De Puy (1893), The Encyclopædia Britannica: a dictionary of arts, sciences, and general literature ; the R.S. Peale reprint, 17 (9th ed.), Werner Co., p. 179
- Maor, Eli (2009), e: The Story of a Number, Princeton University Press, ISBN 978-0-691-14134-3, section 2
- J. J. O'Connor; E. F. Robertson (2001-09), The number e, The MacTutor History of Mathematics archive, retrieved 02/02/2009
- Cajori, Florian (1991), A History of Mathematics (5th ed.), Providence, RI: AMS Bookstore, ISBN 978-0-8218-2102-2, p. 152
- Maor 2009, sections 1, 13
- Eves, Howard Whitley (1992), An introduction to the history of mathematics, The Saunders series (6th ed.), Philadelphia: Saunders, ISBN 978-0-03-029558-4, section 9-3
- Boyer, Carl B. (1991), A History of Mathematics, New York: John Wiley & Sons, ISBN 978-0-471-54397-8, p. 484, 489
- Bryant, Walter W., A History of Astronomy, London: Methuen & Co, p. 44
- Campbell-Kelly, Martin (2003), The history of mathematical tables: from Sumer to spreadsheets, Oxford scholarship online, Oxford University Press, ISBN 978-0-19-850841-0, section 2
- Abramowitz, Milton; Stegun, Irene A., eds. (1972), Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables (10th ed.), New York: Dover Publications, ISBN 978-0-486-61272-0, section 4.7., p. 89
- Spiegel, Murray R.; Moyer, R.E. (2006), Schaum's outline of college algebra, Schaum's outline series, New York: McGraw-Hill, ISBN 978-0-07-145227-4, p. 264
- Devlin, Keith (2004). Sets, functions, and logic: an introduction to abstract mathematics. Chapman & Hall/CRC mathematics (3rd ed.). Boca Raton, Fla: Chapman & Hall/CRC. ISBN 978-1-58488-449-1.[verification needed], or see the references in function
- Lang, Serge (1997), Undergraduate analysis, Undergraduate Texts in Mathematics (2nd ed.), Berlin, New York: Springer-Verlag, ISBN 978-0-387-94841-6, MR 1476913, section III.3
- Lang 1997, section IV.2
- Stewart, James (2007), Single Variable Calculus: Early Transcendentals, Belmont: Thomson Brooks/Cole, ISBN 978-0-495-01169-9, section 1.6
- "Calculation of d/dx(Log(b,x))". Wolfram Alpha. Wolfram Research. Retrieved 15 March 2011.
- Kline, Morris (1998), Calculus: an intuitive and physical approach, Dover books on mathematics, New York: Dover Publications, ISBN 978-0-486-40453-0, p. 386
- "Calculation of Integrate(ln(x))". Wolfram Alpha. Wolfram Research. Retrieved 15 March 2011.
- Abramowitz & Stegun, eds. 1972, p. 69
- Courant, Richard (1988), Differential and integral calculus. Vol. I, Wiley Classics Library, New York: John Wiley & Sons, ISBN 978-0-471-60842-4, MR 1009558, section III.6
- Havil, Julian (2003), Gamma: Exploring Euler's Constant, Princeton University Press, ISBN 978-0-691-09983-5, sections 11.5 and 13.8
- Nomizu, Katsumi (1996), Selected papers on number theory and algebraic geometry 172, Providence, RI: AMS Bookstore, p. 21, ISBN 978-0-8218-0445-2
- Baker, Alan (1975), Transcendental number theory, Cambridge University Press, ISBN 978-0-521-20461-3, p. 10
- Muller, Jean-Michel (2006), Elementary functions (2nd ed.), Boston, MA: Birkhäuser Boston, ISBN 978-0-8176-4372-0, sections 4.2.2 (p. 72) and 5.5.2 (p. 95)
- Hart, Cheney, Lawson et al. (1968), Computer Approximations, SIAM Series in Applied Mathematics, New York: John Wiley, section 6.3, p. 105–111
- Zhang, M.; Delgado-Frias, J.G.; Vassiliadis, S. (1994), "Table driven Newton scheme for high precision logarithm generation", IEE Proceedings Computers & Digital Techniques 141 (5): 281–292, doi:10.1049/ip-cdt:19941268, ISSN 1350-387, section 1 for an overview
- Meggitt, J. E. (April 1962), "Pseudo Division and Pseudo Multiplication Processes", IBM Journal, doi:10.1147/rd.62.0210
- Kahan, W. (May 20, 2001), Pseudo-Division Algorithms for Floating-Point Logarithms and Exponentials
- Abramowitz & Stegun, eds. 1972, p. 68
- Sasaki, T.; Kanada, Y. (1982), "Practically fast multiple-precision evaluation of log(x)", Journal of Information Processing 5 (4): 247–250, retrieved 30 March 2011
- Ahrendt, Timm (1999), Fast computations of the exponential function, Lecture notes in computer science 1564, Berlin, New York: Springer, pp. 302–312, doi:10.1007/3-540-49116-3_28
- Maor 2009, p. 135
- Frey, Bruce (2006), Statistics hacks, Hacks Series, Sebastopol, CA: O'Reilly, ISBN 978-0-596-10164-0, chapter 6, section 64
- Ricciardi, Luigi M. (1990), Lectures in applied mathematics and informatics, Manchester: Manchester University Press, ISBN 978-0-7190-2671-3, p. 21, section 1.3.2
- Bakshi, U. A. (2009), Telecommunication Engineering, Pune: Technical Publications, ISBN 978-81-8431-725-1, section 5.2
- Maling, George C. (2007), "Noise", in Rossing, Thomas D., Springer handbook of acoustics, Berlin, New York: Springer-Verlag, ISBN 978-0-387-30446-5, section 23.0.2
- Tashev, Ivan Jelev (2009), Sound Capture and Processing: Practical Approaches, New York: John Wiley & Sons, ISBN 978-0-470-31983-3, p. 48
- Chui, C.K. (1997), Wavelets: a mathematical tool for signal processing, SIAM monographs on mathematical modeling and computation, Philadelphia: Society for Industrial and Applied Mathematics, ISBN 978-0-89871-384-8, p. 180
- Crauder, Bruce; Evans, Benny; Noell, Alan (2008), Functions and Change: A Modeling Approach to College Algebra (4th ed.), Boston: Cengage Learning, ISBN 978-0-547-15669-9, section 4.4.
- Bradt, Hale (2004), Astronomy methods: a physical approach to astronomical observations, Cambridge Planetary Science, Cambridge University Press, ISBN 978-0-521-53551-9, section 8.3, p. 231
- IUPAC (1997), A. D. McNaught, A. Wilkinson, ed., Compendium of Chemical Terminology ("Gold Book") (2nd ed.), Oxford: Blackwell Scientific Publications, doi:10.1351/goldbook, ISBN 978-0-9678550-9-7
- Bird, J. O. (2001), Newnes engineering mathematics pocket book (3rd ed.), Oxford: Newnes, ISBN 978-0-7506-4992-6, section 34
- Goldstein, E. Bruce (2009), Encyclopedia of Perception, Encyclopedia of Perception, Thousand Oaks, CA: Sage, ISBN 978-1-4129-4081-8, p. 355–356
- Matthews, Gerald (2000), Human performance: cognition, stress, and individual differences, Human Performance: Cognition, Stress, and Individual Differences, Hove: Psychology Press, ISBN 978-0-415-04406-6, p. 48
- Welford, A. T. (1968), Fundamentals of skill, London: Methuen, ISBN 978-0-416-03000-6, OCLC 219156, p. 61
- Paul M. Fitts (June 1954), "The information capacity of the human motor system in controlling the amplitude of movement", Journal of Experimental Psychology 47 (6): 381–391, doi:10.1037/h0055392, PMID 13174710, reprinted in Paul M. Fitts (1992), "The information capacity of the human motor system in controlling the amplitude of movement" (PDF), Journal of Experimental Psychology: General 121 (3): 262–269, doi:10.1037/0096-3445.121.3.262, PMID 1402698, retrieved 30 March 2011
- Banerjee, J. C. (1994), Encyclopaedic dictionary of psychological terms, New Delhi: M.D. Publications, ISBN 978-81-85880-28-0, OCLC 33860167, p. 304
- Nadel, Lynn (2005), Encyclopedia of cognitive science, New York: John Wiley & Sons, ISBN 978-0-470-01619-0, lemmas Psychophysics and Perception: Overview
- Siegler, Robert S.; Opfer, John E. (2003), "The Development of Numerical Estimation. Evidence for Multiple Representations of Numerical Quantity", Psychological Science 14 (3): 237–43, doi:10.1111/1467-9280.02438, PMID 12741747
- Dehaene, Stanislas; Izard, Véronique; Spelke, Elizabeth; Pica, Pierre (2008), "Log or Linear? Distinct Intuitions of the Number Scale in Western and Amazonian Indigene Cultures", Science 320 (5880): 1217–1220, doi:10.1126/science.1156540, PMC 2610411, PMID 18511690
- Breiman, Leo (1992), Probability, Classics in applied mathematics, Philadelphia: Society for Industrial and Applied Mathematics, ISBN 978-0-89871-296-4, section 12.9
- Aitchison, J.; Brown, J. A. C. (1969), The lognormal distribution, Cambridge University Press, ISBN 978-0-521-04011-2, OCLC 301100935
- Jean Mathieu and Julian Scott (2000), An introduction to turbulent flow, Cambridge University Press, p. 50, ISBN 978-0-521-77538-0
- Rose, Colin; Smith, Murray D. (2002), Mathematical statistics with Mathematica, Springer texts in statistics, Berlin, New York: Springer-Verlag, ISBN 978-0-387-95234-5, section 11.3
- Tabachnikov, Serge (2005), Geometry and Billiards, Providence, R.I.: American Mathematical Society, pp. 36–40, ISBN 978-0-8218-3919-5, section 2.1
- Durtschi, Cindy; Hillison, William; Pacini, Carl (2004), "The Effective Use of Benford's Law in Detecting Fraud in Accounting Data", Journal of Forensic Accounting V: 17–34
- Wegener, Ingo (2005), Complexity theory: exploring the limits of efficient algorithms, Berlin, New York: Springer-Verlag, ISBN 978-3-540-21045-0, pages 1-2
- Harel, David; Feldman, Yishai A. (2004), Algorithmics: the spirit of computing, New York: Addison-Wesley, ISBN 978-0-321-11784-7, p. 143
- Knuth, Donald (1998), The Art of Computer Programming, Reading, Mass.: Addison-Wesley, ISBN 978-0-201-89685-5, section 6.2.1, pp. 409–426
- Donald Knuth 1998, section 5.2.4, pp. 158–168
- Wegener, Ingo (2005), Complexity theory: exploring the limits of efficient algorithms, Berlin, New York: Springer-Verlag, p. 20, ISBN 978-3-540-21045-0
- Mohr, Hans; Schopfer, Peter (1995), Plant physiology, Berlin, New York: Springer-Verlag, ISBN 978-3-540-58016-4, chapter 19, p. 298
- Eco, Umberto (1989), The open work, Harvard University Press, ISBN 978-0-674-63976-8, section III.I
- Sprott, Julien Clinton (2010), Elegant Chaos: Algebraically Simple Chaotic Flows, New Jersey: World Scientific, ISBN 978-981-283-881-0, section 1.9
- Helmberg, Gilbert (2007), Getting acquainted with fractals, De Gruyter Textbook, Berlin, New York: Walter de Gruyter, ISBN 978-3-11-019092-2
- Wright, David (2009), Mathematics and music, Providence, RI: AMS Bookstore, ISBN 978-0-8218-4873-9, chapter 5
- Bateman, P. T.; Diamond, Harold G. (2004), Analytic number theory: an introductory course, New Jersey: World Scientific, ISBN 978-981-256-080-3, OCLC 492669517, theorem 4.1
- P. T. Bateman & Diamond 2004, Theorem 8.15
- Slomson, Alan B. (1991), An introduction to combinatorics, London: CRC Press, ISBN 978-0-412-35370-3, chapter 4
- Ganguly, S. (2005), Elements of Complex Analysis, Kolkata: Academic Publishers, ISBN 978-81-87504-86-3, Definition 1.6.3
- Nevanlinna, Rolf Herman; Paatero, Veikko (2007), Introduction to complex analysis, Providence, RI: AMS Bookstore, ISBN 978-0-8218-4399-4, section 5.9
- Moore, Theral Orvis; Hadlock, Edwin H. (1991), Complex analysis, Singapore: World Scientific, ISBN 978-981-02-0246-0, section 1.2
- Wilde, Ivan Francis (2006), Lecture notes on complex analysis, London: Imperial College Press, ISBN 978-1-86094-642-4, theorem 6.1.
- Higham, Nicholas (2008), Functions of Matrices. Theory and Computation, Philadelphia, PA: SIAM, ISBN 978-0-89871-646-7, chapter 11.
- Neukirch, Jürgen (1999), Algebraic Number Theory, Grundlehren der mathematischen Wissenschaften 322, Berlin: Springer-Verlag, ISBN 978-3-540-65399-8, Zbl 0956.11021, MR1697859, section II.5.
- Hancock, Edwin R.; Martin, Ralph R.; Sabin, Malcolm A. (2009), Mathematics of Surfaces XIII: 13th IMA International Conference York, UK, September 7–9, 2009 Proceedings, Springer, p. 379, ISBN 978-3-642-03595-1
- Stinson, Douglas Robert (2006), Cryptography: Theory and Practice (3rd ed.), London: CRC Press, ISBN 978-1-58488-508-5
- Lidl, Rudolf; Niederreiter, Harald (1997), Finite fields, Cambridge University Press, ISBN 978-0-521-39231-0
- Corless, R.; Gonnet, G.; Hare, D.; Jeffrey, D.; Knuth, Donald (1996), "On the Lambert W function", Advances in Computational Mathematics (Berlin, New York: Springer-Verlag) 5: 329–359, doi:10.1007/BF02124750, ISSN 1019-7168
- Cherkassky, Vladimir; Cherkassky, Vladimir S.; Mulier, Filip (2007), Learning from data: concepts, theory, and methods, Wiley series on adaptive and learning systems for signal processing, communications, and control, New York: John Wiley & Sons, ISBN 978-0-471-68182-3, p. 357
- Bourbaki, Nicolas (1998), General topology. Chapters 5—10, Elements of Mathematics, Berlin, New York: Springer-Verlag, ISBN 978-3-540-64563-4, MR 1726872, section V.4.1
- Ambartzumian, R. V. (1990), Factorization calculus and geometric probability, Cambridge University Press, ISBN 978-0-521-34535-4, section 1.4
- Esnault, Hélène; Viehweg, Eckart (1992), Lectures on vanishing theorems, DMV Seminar 20, Basel, Boston: Birkhäuser Verlag, ISBN 978-3-7643-2822-1, MR 1193913, section 2
- Apostol, T.M. (2010), "Logarithm", in Olver, Frank W. J.; Lozier, Daniel M.; Boisvert, Ronald F. et al., NIST Handbook of Mathematical Functions, Cambridge University Press, ISBN 978-0521192255, MR2723248
- Look up logarithm in Wiktionary, the free dictionary.
Media related to Logarithm at Wikimedia Commons
- Khan Academy: Logarithms, free online micro lectures
- Hazewinkel, Michiel, ed. (2001), "Logarithmic function", Encyclopedia of Mathematics, Springer, ISBN 978-1-55608-010-4
- Colin Byfleet, Educational video on logarithms, retrieved 12/10/2010
- Edward Wright, Translation of Napier's work on logarithms, retrieved 12/10/2010
- Tiếng Việt | http://en.m.wikipedia.org/wiki/Logarithm | 13 |
19 | In this next post I will present the definition and anatomy of a basic argument. This will cover premises and the conditions which make them true. Following from this will be an explanation of conclusions and how they bring the argument together.
What is an argument?
An argument in this case is a form of proof for a point of view. While the argument between theists and atheists might be heated and emotionally driven in some cases, the atheist (and many theists) believes that breaking things down to a logical analysis is one of the best ways to learn the truth of something. In philosophy there are many forms of arguments and different styles of setting out premises and conclusions. What follows is the most basic understanding of what an argument consists of.
Premises are the basis on which an argument is formed. A premise is a statement or an observation which should logically justify a conclusion.
In order to properly justify a conclusion and produce a sound argument, the premises you state must be true (and the conclusion must logically follow from them). This is because of the following general rule:
if the premises are true then the conclusion MUST be true
So, what is 'true' for the sake of rational argument?
Truth is verifiable. Any statement that is said to be true must be consistent and testable. It must have conditions that would allow it to be proven false. It must be repeatedly observable and objective.
The conclusion is the final statement in an argument and should be designed to take into account the premises which you have set. A conclusion must be logically justified on the basis of your premises. The truth and validity of the conclusion are directly related to the quality of the premises.
The components I've presented have quite specific roles in an argument. Being aware of what makes a good argument contributes to the quality of the discussion you conduct. You will be able to form better arguments as well as show your opponents where their arguments are failing.
Further reading can begin here; a more in-depth run through of arguments. http://www.iep.utm.edu/argument/ | http://bethecog.blogspot.com/2012/01/how-to-argue-with-atheist-2-anatomy-of.html | 13 |
113 | A syllogism (Greek: συλλογισμός – syllogismos – "conclusion," "inference") is a kind of logical argument in which one proposition (the conclusion) is inferred from two or more others (the premises) of a specific form. In antiquity, two rival theories of the syllogism existed: Aristotelian syllogistic and Stoic syllogistic.
Aristotle defines the syllogism as "a discourse in which certain (specific) things having been supposed, something different from the things supposed results of necessity because these things are so." Despite this very general definition, Aristotle limits himself to categorical syllogisms which consist of three categorical propositions in his work Prior Analytics. These included categorical modal syllogisms.
From the Middle Ages onwards, "categorical syllogism" and "syllogism" were mostly used interchangeably, and the present article is concerned with this traditional use of "syllogism" only. The syllogism was at the core of traditional deductive reasoning, where facts are determined by combining existing statements, in contrast to inductive reasoning where facts are determined by repeated observations.
Within academic contexts, the syllogism was superseded by first-order predicate logic following the work of Gottlob Frege, in particular his Begriffsschrift (Concept Script) (1879), but syllogisms remain useful in some circumstances, and for general-audience introductions to logic.
A categorical syllogism consists of three parts:
- Major premise
- Minor premise
- Conclusion
Each part is a categorical proposition, and each categorical proposition contains two categorical terms. In Aristotle, each of the premises is in the form "All A are B," "Some A are B", "No A are B" or "Some A are not B", where "A" is one term and "B" is another. "All A are B," and "No A are B" are termed universal propositions; "Some A are B" and "Some A are not B" are termed particular propositions. More modern logicians allow some variation. Each of the premises has one term in common with the conclusion: in a major premise, this is the major term (i.e., the predicate of the conclusion); in a minor premise, it is the minor term (the subject) of the conclusion. For example:
- Major premise: All humans are mortal.
- Minor premise: All Greeks are humans.
- Conclusion: All Greeks are mortal.
Each of the three distinct terms represents a category. In the above example, humans, mortal, and Greeks. Mortal is the major term, Greeks the minor term. The premises also have one term in common with each other, which is known as the middle term; in this example, humans. Both of the premises are universal, as is the conclusion.
- Major premise: All mortals die.
- Minor premise: Some mortals are men.
- Conclusion: Some men die.
Here, the major term is die, the minor term is men, and the middle term is mortals. The major premise is universal; the minor premise and the conclusion are particular.
A sorites is a form of argument in which a series of incomplete syllogisms is so arranged that the predicate of each premise forms the subject of the next until the subject of the first is joined with the predicate of the last in the conclusion. For example, if one argues that a given number of grains of sand does not make a heap and that an additional grain does not either, then to conclude that no additional amount of sand would make a heap is to construct a sorites argument.
Types of syllogism
There are infinitely many possible syllogisms, but only a finite number of logically distinct types, which we classify and enumerate below. Note that the syllogism above has the abstract form:
- Major premise: All M are P.
- Minor premise: All S are M.
- Conclusion: All S are P.
(Note: M – Middle, S – subject, P – predicate. See below for more detailed explanation.)
The premises and conclusion of a syllogism can be any of four types, which are labeled by letters as follows. The meaning of the letters is given by the table:
|a||All||S||are||P||universal affirmatives||All humans are mortal.|
|e||No||S||are||P||universal negatives||No humans are perfect.|
|i||Some||S||are||P||particular affirmatives||Some humans are healthy.|
|o||Some||S||are not||P||particular negatives||Some humans are not clever.|
In Analytics, Aristotle mostly uses the letters A, B and C (actually, the Greek letters alpha, beta and gamma) as term place holders, rather than giving concrete examples, an innovation at the time. It is traditional to use is rather than are as the copula, hence All A is B rather than All As are Bs. It is traditional and convenient practice to use a, e, i, o as infix operators to enable the categorical statements to be written succinctly thus:
|All A is B||AaB|
|No A is B||AeB|
|Some A is B||AiB|
|Some A is not B||AoB|
The letter S is the subject of the conclusion, P is the predicate of the conclusion, and M is the middle term. The major premise links M with P and the minor premise links M with S. However, the middle term can be either the subject or the predicate of each premise where it appears. The differing positions of the major, minor, and middle terms gives rise to another classification of syllogisms known as the figure. Given that in each case the conclusion is S-P, the four figures are:
|Figure 1||Figure 2||Figure 3||Figure 4|
|Major premise: M–P||Major premise: P–M||Major premise: M–P||Major premise: P–M|
|Minor premise: S–M||Minor premise: S–M||Minor premise: M–S||Minor premise: M–S|
(Note, however, that, following Aristotle's treatment of the figures, some logicians—e.g., Peter Abelard and John Buridan—reject the fourth figure as a figure distinct from the first. See entry on the Prior Analytics.)
Putting it all together, there are 256 possible types of syllogisms (or 512 if the order of the major and minor premises is changed, though this makes no difference logically). Each premise and the conclusion can be of type A, E, I or O, and the syllogism can be any of the four figures. A syllogism can be described briefly by giving the letters for the premises and conclusion followed by the number for the figure. For example, the syllogism BARBARA above is AAA-1, or "A-A-A in the first figure".
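This enumeration can be carried out mechanically. The following Python sketch (not from the article; it adopts the modern reading in which a term may denote an empty class) encodes each statement type as a condition on finite sets and searches for counterexamples over a three-element universe, which is enough to separate valid from invalid forms; under this reading it reports 15 valid forms, the traditional ones minus the nine that need a non-empty term:

    from itertools import product

    # The four statement types as conditions on sets X (subject) and Y (predicate).
    STATEMENT = {
        "a": lambda X, Y: X <= Y,        # All X are Y
        "e": lambda X, Y: not (X & Y),   # No X are Y
        "i": lambda X, Y: bool(X & Y),   # Some X are Y
        "o": lambda X, Y: bool(X - Y),   # Some X are not Y
    }

    # (subject, predicate) of the major and minor premise in each figure.
    FIGURES = {
        1: (("M", "P"), ("S", "M")),
        2: (("P", "M"), ("S", "M")),
        3: (("M", "P"), ("M", "S")),
        4: (("P", "M"), ("M", "S")),
    }

    def subsets(universe):
        """All subsets of a finite universe, as frozensets."""
        subs = [frozenset()]
        for element in universe:
            subs += [s | {element} for s in subs]
        return subs

    SETS = subsets({0, 1, 2})   # three elements suffice to refute every invalid form

    def valid(major, minor, conclusion, figure):
        """True if no choice of S, M, P makes both premises true and the conclusion false."""
        (maj_s, maj_p), (min_s, min_p) = FIGURES[figure]
        for S, M, P in product(SETS, repeat=3):
            terms = {"S": S, "M": M, "P": P}
            if (STATEMENT[major](terms[maj_s], terms[maj_p])
                    and STATEMENT[minor](terms[min_s], terms[min_p])
                    and not STATEMENT[conclusion](S, P)):
                return False   # counterexample found
        return True

    valid_forms = ["{}{}{}-{}".format(a, b, c, fig)
                   for a, b, c, fig in product("aeio", "aeio", "aeio", (1, 2, 3, 4))
                   if valid(a, b, c, fig)]
    print(len(valid_forms))   # 15 under this reading
    print(valid_forms)        # includes 'aaa-1' (Barbara) and 'eae-1' (Celarent)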
The vast majority of the 256 possible forms of syllogism are invalid (the conclusion does not follow logically from the premises). The table below shows the valid forms. Even some of these are sometimes considered to commit the existential fallacy, meaning they are invalid if they mention an empty category. These controversial patterns are marked in italics.
|Figure 1||Figure 2||Figure 3||Figure 4|
|Barbara||Cesare||Datisi||Calemes|
|Celarent||Camestres||Disamis||Dimatis|
|Darii||Festino||Ferison||Fresison|
|Ferio||Baroco||Bocardo||Calemos|
|Barbari||Cesaro||Darapti||Fesapo|
|Celaront||Camestros||Felapton||Bamalip|
(The nine forms Barbari, Celaront, Cesaro, Camestros, Darapti, Felapton, Calemos, Fesapo and Bamalip — shown in italics in the original table — are the controversial ones that require a non-empty term.)
Next to each premise and conclusion is a shorthand description of the sentence. So in AAI-3, the premise "All squares are rectangles" becomes "MaP"; the symbols mean that the first term ("square") is the middle term, the second term ("rectangle") is the predicate of the conclusion, and the relationship between the two terms is labeled "a" (All M are P).
The following table shows all syllogisms that are essentially different. The similar syllogisms actually share the same premises, just written in a different way. For example "Some pets are kittens" (SiM in Darii) could also be written as "Some kittens are pets" (MiS in Datisi).
In the Venn diagrams, the black areas indicate no elements, and the red areas indicate at least one element.
Barbara (AAA-1)
- All men are mortal. (MaP)
- All Greeks are men. (SaM)
- ∴ All Greeks are mortal. (SaP)
Celarent (EAE-1)
Similar: Cesare (EAE-2)
- No reptiles have fur. (MeP)
- All snakes are reptiles. (SaM)
- ∴ No snakes have fur. (SeP)
Calemes is like Celarent with S and P exchanged.
Darii (AII-1)
Similar: Datisi (AII-3)
- All rabbits have fur. (MaP)
- Some pets are rabbits. (SiM)
- ∴ Some pets have fur. (SiP)
Dimatis is like Darii with S and P exchanged.
Ferio (EIO-1)
Similar: Festino (EIO-2), Ferison (EIO-3), Fresison (EIO-4)
- No homework is fun. (MeP)
- Some reading is homework. (SiM)
- ∴ Some reading is not fun. (SoP)
Baroco (AOO-2)
- All informative things are useful. (PaM)
- Some websites are not useful. (SoM)
- ∴ Some websites are not informative. (SoP)
Bocardo (OAO-3)
- Some cats have no tails. (MoP)
- All cats are mammals. (MaS)
- ∴ Some mammals have no tails. (SoP)
Barbari (AAI-1)
- All men are mortal. (MaP)
- All Greeks are men. (SaM)
- ∴ Some Greeks are mortal. (SiP)
Bamalip is like Barbari with S and P exchanged:
Celaront (EAO-1)
Similar: Cesaro (EAO-2)
- No reptiles have fur. (MeP)
- All snakes are reptiles. (SaM)
- ∴ Some snakes have no fur. (SoP)
Camestros (AEO-2)
Similar: Calemos (AEO-4)
- All horses have hooves. (PaM)
- No humans have hooves. (SeM)
- ∴ Some humans are not horses. (SoP)
Felapton (EAO-3)
Similar: Fesapo (EAO-4)
- No flowers are animals. (MeP)
- All flowers are plants. (MaS)
- ∴ Some plants are not animals. (SoP)
Table of all syllogisms
This table shows all 24 valid syllogisms, represented by Venn diagrams.
(9 of them, on the right side of the table, require that one category must not be empty.)
Syllogisms of the same type are in the same row, and very similar syllogisms are in the same column.
Terms in syllogism
We may, with Aristotle, distinguish singular terms such as Socrates and general terms such as Greeks. Aristotle further distinguished (a) terms that could be the subject of predication, and (b) terms that could be predicated of others by the use of the copula (is/are). (Such a predication is known as a distributive as opposed to non-distributive as in Greeks are numerous. It is clear that Aristotle's syllogism works only for distributive predication for we cannot reason All Greeks are animals, animals are numerous, therefore All Greeks are numerous.) In Aristotle's view singular terms were of type (a) and general terms of type (b). Thus Men can be predicated of Socrates but Socrates cannot be predicated of anything. Therefore to enable a term to be interchangeable — that is to be either in the subject or predicate position of a proposition in a syllogism — the terms must be general terms, or categorical terms as they came to be called. Consequently the propositions of a syllogism should be categorical propositions (both terms general) and syllogisms employing just categorical terms came to be called categorical syllogisms.
It is clear that nothing would prevent a singular term occurring in a syllogism — so long as it was always in the subject position — however such a syllogism, even if valid, would not be a categorical syllogism. An example of such would be Socrates is a man, All men are mortal, therefore Socrates is mortal. Intuitively this is as valid as All Greeks are men, all men are mortal therefore all Greeks are mortals. To argue that its validity can be explained by the theory of syllogism it would be necessary to show that Socrates is a man is the equivalent of a categorical proposition. It can be argued Socrates is a man is equivalent to All that are identical to Socrates are men, so our non-categorical syllogism can be justified by use of the equivalence above and then citing BARBARA.[original research?]
If a statement includes a term so that the statement is false if the term has no instances (is not instantiated) then the statement is said to entail existential import with respect to that term. In particular, a universal statement of the form All A is B has existential import with respect to A if All A is B is false if there are no As.
The following problems arise[original research?]:
- (a) In natural language and normal use, which statements of the forms All A is B, No A is B, Some A is B and Some A is not B have existential import and with respect to which terms?
- (b) In the four forms of categorical statements used in syllogism, which statements of the form AaB, AeB, AiB and AoB have existential import and with respect to which terms?
- (c) What existential imports must the forms AaB, AeB, AiB and AoB have for the square of opposition be valid?
- (d) What existential imports must the forms AaB, AeB, AiB and AoB have to preserve the validity of the traditionally valid forms of syllogisms?
- (e) Are the existential imports required to satisfy (d) above such that the normal uses in natural languages of the forms All A is B, No A is B, Some A is B and Some A is not B are intuitively and fairly reflected by the categorical statements of forms AaB, AeB, AiB and AoB?
For example, if it is accepted that AiB is false if there are no As and AaB entails AiB, then AiB has existential import with respect to A, and so does AaB. Further, if it is accepted that AiB entails BiA, then AiB and AaB have existential import with respect to B as well. Similarly, if AoB is false if there are no As, and AeB entails AoB, and AeB entails BeA (which in turn entails BoA) then both AeB and AoB have existential import with respect to both A and B. It follows immediately that all universal categorical statements have existential import with respect to both terms. If AaB and AeB are fair representations of the use of statements in normal natural language of All A is B and No A is B respectively, then the following example consequences arise:
- "All flying horses are mythological" is false if there are not flying horses.
- If "No men are fire-eating rabbits" is true, then "There are fire-eating rabbits" is false.
and so on.
If it is ruled that no universal statement has existential import then the square of opposition fails in several respects (e.g. AaB does not entail AiB) and a number of syllogisms are no longer valid (e.g. BaC, AaB -> AiC).
These problems and paradoxes arise in both natural language statements and statements in syllogism form because of ambiguity, in particular ambiguity with respect to All. If "Fred claims all his books were Pulitzer Prize winners", is Fred claiming that he wrote any books? If not, then is what he claims true? Suppose Jane says none of her friends are poor; is that true if she has no friends? The first-order predicate calculus avoids the problems of such ambiguity by using formulae that carry no existential import with respect to universal statements; existential claims have to be explicitly stated. Thus natural language statements of the forms All A is B, No A is B, Some A is B and Some A is not B can be exactly represented in first-order predicate calculus in which any existential import with respect to terms A and/or B is made explicitly or not made at all. Consequently the four forms AaB, AeB, AiB and AoB can be represented in first-order predicate calculus in every combination of existential import, so that it can establish which construal, if any, preserves the square of opposition and the validity of the traditionally valid syllogisms. Strawson claims that such a construal is possible, but the results are such that, in his view, the answer to question (e) above is no.
Syllogism in the history of logic
The Aristotelian syllogism dominated Western philosophical thought from the 3rd Century to the 17th Century. At that time, Sir Francis Bacon rejected the idea of syllogism and deductive reasoning by asserting that it was fallible and illogical. Bacon offered a more inductive approach to logic in which experiments were conducted and axioms were drawn from the observations discovered in them.
In the 19th Century, modifications to syllogism were incorporated to deal with disjunctive ("A or B") and conditional ("if A then B") statements. Kant famously claimed, in Logic (1800), that logic was the one completed science, and that Aristotelian logic more or less included everything about logic there was to know. (This work is not necessarily representative of Kant's mature philosophy, which is often regarded as an innovation to logic itself.) Though there were alternative systems of logic such as Avicennian logic or Indian logic elsewhere, Kant's opinion stood unchallenged in the West until 1879 when Frege published his Begriffsschrift (Concept Script). This introduced a calculus, a method of representing categorical statements — and statements that are not provided for in syllogism as well — by the use of quantifiers and variables.
This led to the rapid development of sentential logic and first-order predicate logic, subsuming syllogistic reasoning, which was, therefore, after 2000 years, suddenly considered obsolete by many[original research?]. The Aristotelian system is explicated in modern fora of academia primarily in introductory material and historical study.
One notable exception to this modern relegation is the continued application of Aristotelian logic by officials of the Congregation for the Doctrine of the Faith, and the Apostolic Tribunal of the Roman Rota, which still requires that arguments crafted by Advocates be presented in syllogistic format.
People often make mistakes when reasoning syllogistically.
For instance, from the premises some A are B, some B are C, people tend to come to a definitive conclusion that therefore some A are C. However, this does not follow according to the rules of classical logic. For instance, while some cats (A) are black things (B), and some black things (B) are televisions (C), it does not follow from the parameters that some cats (A) are televisions (C). This is because first, the mood of the syllogism invoked is illicit (III), and second, the supposition of the middle term is variable between that of the middle term in the major premise, and that of the middle term in the minor premise (not all "some" cats are by necessity of logic the same "some black things").
Determining the validity of a syllogism involves determining the distribution of each term in each statement, meaning whether all members of that term are accounted for.
In simple syllogistic patterns, the fallacies of invalid patterns are:
- Undistributed middle: Neither of the premises accounts for all members of the middle term, which consequently fails to link the major and minor term.
- Illicit treatment of the major term: The conclusion implicates all members of the major term (P — meaning the proposition is negative); however, the major premise does not account for them all (i.e., P is either an affirmative predicate or a particular subject there).
- Illicit treatment of the minor term: Same as above, but for the minor term (S — meaning the proposition is universal) and minor premise (where S is either a particular subject or an affirmative predicate).
- Exclusive premises: Both premises are negative, meaning no link is established between the major and minor terms.
- Affirmative conclusion from a negative premise: If either premise is negative, the conclusion must also be.
- Negative conclusion from affirmative premises: If both premises are affirmative, the conclusion must also be.
- Existential fallacy: This is a more controversial one. If both premises are universal, i.e. "All" or "No" statements, one school of thought says they do not imply the existence of any members of the terms. In this case, the conclusion cannot be existential; i.e. beginning with "Some". Another school of thought says that affirmative statements (universal or particular) do imply the subject's existence, but negatives do not. A third school of thought says that any type of proposition may or may not involve the subject's existence, and though this may condition the conclusion, it does not affect the form of the syllogism.[original research?]
- Buddhist logic
- Other types of syllogism:
- Syllogistic fallacy
- The False Subtlety of the Four Syllogistic Figures
- Venn diagram
- Michael Frede, "Stoic vs. Peripatetic Syllogistic", Archive for the History of Philosophy 56, 1975, 99-124.
- Aristotle, "Prior Analytics", 24b18–20
- Stanford Encyclopedia of Philosophy: Ancient Logic Aristotle Non-Modal Syllogistic
- Stanford Encyclopedia of Philosophy: Ancient Logic Aristotle Modal Logic
- Hurley, Patrick J (2011). A Concise Introduction to Logic, Cengage Learning, ISBN 9780840034175
- Zegarelli, Mark (2010). Logic for Dummies, John Wiley & Sons, ISBN 9781118053072
- "Philosophical Dictionary: Caird-Catharsis". Philosophypages.com. 2002-08-08. Retrieved 2009-12-14.
- According to Copi, p. 127: 'The letter names are presumed to come from the Latin words "AffIrmo" and "nEgO," which mean "I affirm" and "I deny," respectively; the first capitalized letter of each word is for universal, the second for particular'
- Bacon, Francis. The Great Instauration, 1620
- See, e.g., Evans, J. St. B. T (1989). Bias in human reasoning. London: LEA.
- See the meta-analysis by Khemlani, S. & Johnson-Laird, P.N. (2012). Theories of the syllogism: A meta-analysis. Psychological Bulletin, 138, 427-457.
- See the meta-analysis by Chater, N. & Oaksford, M. (1999). The Probability Heuristics Model of Syllogistic Reasoning. Cognitive Psychology, 38, 191–258.
- Aristotle, Prior Analytics. transl. Robin Smith (Hackett, 1989) ISBN 0-87220-064-7
- Blackburn, Simon, 1996. "Syllogism" in the Oxford Dictionary of Philosophy. Oxford University Press. ISBN 0-19-283134-8.
- Broadie, Alexander, 1993. Introduction to Medieval Logic. Oxford University Press. ISBN 0-19-824026-0.
- Irving Copi, 1969. Introduction to Logic, 3rd ed. Macmillan Company.
- John Corcoran (logician), 1972. Completeness of an ancient logic Journal of Symbolic Logic 37: 696–702.
- John Corcoran (logician), 1994. The founding of logic. Modern interpretations of Aristotle's logic Ancient Philosophy 14: 9–24.
- Hamblin, Charles L., 1970. Fallacies, Methuen : London, ISBN 0-416-70070-5. Cf. on validity of syllogisms: "A simple set of rules of validity was finally produced in the later Middle Ages, based on the concept of Distribution."
- Jan Łukasiewicz, 1987 (1957). Aristotle's Syllogistic from the Standpoint of Modern Formal Logic. New York: Garland Publishers. ISBN 0-8240-6924-2. OCLC 15015545.
- Patzig, Günter 1968. Aristotle's theory of the syllogism: a logico-philological study of Book A of the Prior Analytics. Reidel, Dordrecht.
- Smiley, Timothy 1973. What is a syllogism? Journal of Philosophical Logic 2: 136–154.
- Smith, Robin 1986. Immediate propositions and Aristotle's proof theory. Ancient Philosophy 6: 47–68.
- Aristotle's Logic entry by Robin Smith in the Stanford Encyclopedia of Philosophy
- The Traditional Square of Opposition entry by Terence Parsons in the Stanford Encyclopedia of Philosophy
- Medieval Theories of the Syllogism entry by Henrik Lagerlund in the Stanford Encyclopedia of Philosophy
- Aristotle's Prior Analytics: the Theory of Categorical Syllogism an annotated bibliography on Aristotle's syllogistic
- Abbreviatio Montana article by Prof. R. J. Kilcullen of Macquarie University on the medieval classification of syllogisms.
- The Figures of the Syllogism is a brief table listing the forms of the syllogism.
- Interactive Syllogistic Machine A web based syllogistic machine for exploring fallacies, figures, and modes of syllogisms.
- Syllogistic Reasoning in Buddhism – Example & Worksheet
- Fuzzy Syllogistic System | http://en.wikipedia.org/wiki/Syllogism | 13 |
16 |
In contemporary political thought, the term ‘civil rights’ is indissolubly linked to the struggle for equality of American blacks during the 1950s and 60s. The aim of that struggle was to secure the status of equal citizenship in a liberal democratic state. Civil rights are the basic legal rights a person must possess in order to have such a status. They are the rights that constitute free and equal citizenship and include personal, political, and economic rights. No contemporary thinker of significance holds that such rights can be legitimately denied to a person on the basis of race, color, sex, religion, national origin, or disability. Antidiscrimination principles are thus a common ground in contemporary political discussion. However, there is much disagreement in the scholarly literature over the basis and scope of these principles and the ways in which they ought to be implemented in law and policy. In addition, debate exists over the legitimacy of including sexual orientation among the other categories traditionally protected by civil rights law, and there is an emerging literature examining issues of how best to understand discrimination based on disability.
- 1. Rights
- 2. Free and Equal Citizenship
- 3. Discrimination
- 4. Sexual Orientation
- 5. Disability
- 6. Legal Cases and Statutes
- Academic Tools
- Other Internet Resources
- Related Entries
Until the middle of the 20th century, civil rights were usually distinguished from ‘political rights’. The former included the rights to own property, make and enforce contracts, receive due process of law, and worship one's religion. Civil rights also covered freedom of speech and the press (Amar 1998: 216–17). But they did not include the right to hold public office, vote, or to testify in court. The latter were political rights, reserved to adult males. Accordingly, the woman's emancipation movement of the 19th century, which aimed at full sex equality under the law, pressed for equal “civil and political equality” (Taylor 1851/1984: 397 emphasis added)
The civil-political distinction was conceptually and morally unstable insofar as it was used to sort citizens into different categories. It was part of an ideology that classified women as citizens entitled to certain rights but not to the full panoply to which men were entitled. As that ideology broke down, the civil-political distinction began to unravel. The idea that a certain segment of the adult citizenry could legitimately possess one bundle of rights, while another segment would have to make do with an inferior bundle, became increasingly implausible. In the end, the civil-political distinction could not survive the cogency of the principle that all citizens of a liberal democracy were entitled, in Rawls's words, to “a fully adequate scheme of equal basic liberties” (2001: 42).
It may be possible to retain the distinction strictly as one for sorting rights, rather than sorting citizens (Marshall, 1965; Waldron 1993). But it is difficult to give a convincing account of the principles by which this sorting is done. It seems neater and cleaner simply to think of civil rights as the general category of basic rights needed for free and equal citizenship. Yet, it remains a matter of contention which claims are properly conceived as belonging to the category of civil rights (Wellman, 1999). Analysts have distinguished among “three generations” of civil rights claims and have argued over which claims ought to be treated as true matters of civil rights.
The claims for which the American civil rights movement initially fought belong to the first generation of civil rights claims. Those claims included the pre-20th century set of civil rights — such as the rights to receive due process and to make and enforce contracts — but covered political rights as well. However, many thinkers and activists argued that these first-generation claims were too narrow to define the scope of free and equal citizenship. They contended that such citizenship could be realized only by honoring an additional set of claims, including rights to food, shelter, medical care, and employment. This second generation of economic “welfare rights,” the argument went, helped to ensure that the political, economic, and legal rights belonging to the first generation could be made effective in protecting the vital interests of citizens and were not simply paper guarantees.
Yet, some scholars have argued that these second-generation rights should not be subsumed under the category of civil rights. Thus, Cranston writes, “The traditional ‘political and civil rights’ can…be readily secured by legislation. Since the rights are for the most part rights against government interference…the legislation needed had to do no more than restrain the executive's own arm. This is no longer the case when we turn to the ‘right to work’, the ‘right to social security’ and so forth” (1967: 50–51).
However, Cranston fails to recognize that such first-generation rights as due process and the right to vote also require substantial government action and the investment of considerable public resources. Holmes and Sunstein (1999) have made the case that all of the first-generation civil rights require government to do more than simply “restrain the executive's own arm.” It seems problematic to think that a significant distinction can be drawn between first and second-generation rights on the ground that the former, but not the latter, simply require that government refrain from interfering with the actions of persons. Moreover, even if some viable distinction could be drawn along those lines, it would not follow that second-generation rights should be excluded from the category of civil rights. The reason is that the relevant standard for inclusion as a civil right is whether a claim is part of the package of rights constitutive of free and equal citizenship. There is no reason to think that only those claims that can be “readily secured by legislation” belong to that package. And the increasingly dominant view is that welfare rights are essential to adequately satisfying the conditions of free and equal citizenship (Marshall 1965; Waldron 1993; Sunstein 2001).
In the United States, however, the law does not treat issues of economic well-being per se as civil rights matters. Only insofar as economic inequality or deprivation is linked to race, gender or some other traditional category of antidiscrimination law is it considered to be a question of civil rights. In legal terms, poverty is not a “suspect classification.” On the other hand, welfare rights are protected as a matter of constitutional principle in other democracies. For example, section 75 of the Danish Constitution provides that “any person unable to support himself or his dependents shall, where no other person is responsible for his or their maintenance, be entitled to receive public assistance.” And the International Covenant on Economic, Social, and Cultural Rights (see Other Internet Resources) provides that the state parties to the agreement “recognize the right of everyone to an adequate standard of living for himself and his family, including adequate food, clothing and housing, and to the continuous improvement of living conditions.”
A third generation of claims has received considerable attention in recent years, what may be broadly termed “rights of cultural membership.” These include language rights for members of cultural minorities and the rights of indigenous peoples to preserve their cultural institutions and practices and to exercise some measure of political autonomy. There is some overlap with the first-generation rights, such as that of religious liberty, but rights of cultural membership are broader and more controversial.
Article 27 of the International Covenant on Civil and Political Rights (see Other Internet Resources) declares that third-generation rights ought to be protected:
In those States in which ethnic, religious or linguistic minorities exist, persons belonging to such minorities shall not be denied the right, in community with the other members of their group, to enjoy their own culture, to profess and practice their own religion, or to use their own language.
Similarly, the Canadian Charter of Rights and Freedoms protects the language rights of minorities and section 27 provides that “This Charter shall be interpreted in a manner consistent with the preservation and enhancement of the multicultural heritage of Canadians.” In the United States, there is no analogous protection of language rights or multiculturalism, although constitutional doctrine does recognize native Indian tribes as “domestic dependent nations” with some attributes of political self-rule, such as sovereign immunity (Oklahoma Tax Commission v. Citizen Band Potawatomi Indian Tribe).
There is substantial philosophical controversy over the legitimacy and scope of rights of cultural membership. Kymlicka has argued that the liberal commitment to protect the equal rights of individuals requires society to protect such rights, suitably defined (1989; 1994; 1995). He distinguishes among three sorts of rights that have been claimed as part of this third generation by various groups whose culture differs from the dominant culture of a country: (1) rights of self-government, involving a claim to a degree of political autonomy to be exercised through the minority culture's own of institutions, (2) polyethnic rights, involving special claims by members of the minority culture to assist in their integration into the larger society, and (3) representational rights, involving a special claim of the minority culture to have its members serve in legislatures and other political bodies (1995: 27–33). Kymlicka argues that these three sorts of group rights can, in principle, be justified for those populations that he designates as “national minorities,” such as native Americans in the United States and the Québécois and Aboriginals in Canada. A national minority is “an intergenerational community, more or less institutionally complete, occupying a given territory or homeland, [and] sharing a distinct language and history”(18). Kymlicka contends that “granting special representational rights, land claims, or language rights to a [national] minority … can be seen as putting … [it] on a more equal footing [with the majority], by reducing the extent to which the smaller group is vulnerable to the larger” (36–37). Such special rights do not involve granting to the national minority the authority to take away the civil rights of its members. Rather, the rights are “external protections,” providing the group with powers and immunities with which it can protect its culture against the potentially harmful decisions of the broader society (35).
In contrast to national minorities, immigrants who have left their original cultures are entitled only to a much more limited set of group rights, according to Kymlicka. These “polyethnic rights” are claims to have certain adjustments or accommodations made in the prevailing laws and regulations so as give individuals access to mainstream institutions and practices. Thus Kymlicka thinks that Orthodox Jews in the U.S. Armed Forces should have the legal right to wear a yarmulke while on duty and Canadian Sikhs have a legitimate claim to be exempt from motorcycle helmet laws (31).
Waldron (1995) criticizes Kymlicka for exaggerating the importance for the individual of membership in her particular culture and for underestimating the mutability and interpenetration of cultures. Individual freedom requires some cultural context of choice, but it does not require the preservation of the particular context in which the individual finds herself. Liberal individuals must be free to evaluate their culture and to distance themselves from it.
Kukathas criticizes Kymlicka for implying that the liberal commitment to the protection of individual rights is insufficient to treat the interests of minorities with equal consideration. Kukathas contends that “we need to reassert the importance of individual liberty or individual rights and question the idea that cultural minorities have collective rights” (1995: 230). But the system of uniform legal rules that he endorses would keep the state from intervening even when a minority culture inflicts significant harm on its more vulnerable members, e.g., when cultural norms strongly discourage females from seeking the same educational and career opportunities as males.
Barry (2001) asserts that “there are certain rights against oppression, exploitation, and injury, to which every single human being is entitled to lay claim, and…appeals to cultural diversity and pluralism under no circumstances trump the value of basic liberal rights” (132–33). The legal system should protect those rights by impartially imposing the same rules on all persons, regardless of their cultural or religious membership. Barry allows for a few exceptions, such as the accommodation of a Sikh boy whose turban violated school dress regulations, but thinks that the conditions under which such exceptions will be justified “are rarely satisfied” (2001: 62). Barry's position reflects and elaborates Gitlin's earlier condemnation of views advocating distinctive rights for cultural and ethnic minorities. Gitlin condemned such views on the ground that they represent a “swerve from civil rights, emphasizing a universal condition and universalizable rights, to cultural separatism, emphasizing difference and distinct needs” (1995: 153).
At the other end of the spectrum, Taylor (1994) argues for a form of communitarianism that attaches intrinsic importance to the survival of cultures. In his view, differential treatment under the law for certain practices is sometimes justifiable on the ground that such treatment is important for keeping a culture alive. Taylor goes as far as to claim that cultural survival can sometimes trump basic individual rights, such as freedom of speech. Accordingly, he defends legal restrictions on the use of English in Quebec, invoking the survival of Quebec's French culture.
However, it is unclear why intrinsic value should attach to cultural survival as such. Following John Dewey (1939), Kymlicka (1995) rightly emphasizes that liberty would have little or no value to the individual apart from the life-options and meaningful choices provided by culture. But both thinkers also reasonably contend that human interests are ultimately the interests of individual human beings. In light of that contention, it would seem that a culture that could not gain the uncoerced and undeceived adherence of enough individuals to survive would have no moral claim to its continuation. Legal restrictions on basic liberties that are designed to perpetuate a given culture put the cart before the horse: persons should have their basic liberties protected first, as those protections serve the most important human interests. Only when those interests are protected can we then say that a culture should survive, not because the culture is intrinsically valuable, but rather because it has the uncoerced adherence of a sufficient number of persons.
The treatment of blacks under slavery and Jim Crow presents a history of injustice and cultural annihilation that is similar in some respects to the treatment of Native Americans. However, civil rights principles played a very different role in the struggle of Native Americans against the injustices perpetrated against them by whites.
Civil rights principles demand inclusion of the individuals from a disadvantaged group in the major institutions of society on an equal basis with the individuals who are already treated as full citizens. The principles do not require that the disadvantaged group be given a right to govern its own affairs. A right of political self-determination, in contrast, demands that a group have the freedom to order its affairs as it sees fit, and, to that extent, political self-determination has a separatist aspect, even if something less than complete sovereignty is involved.
The pursuit of civil rights by American blacks overshadowed the pursuit of political self-determination. The fact that American blacks lacked any territory of their own on which they could rule themselves favored the civil rights strategy, although arguments were made that there was sufficient geographical concentration of blacks in certain parts of the South (the so-called “Black Belt”) for the African-Americans there to form their own self-governing nation. Thus, shortly after World War II, Harry Haywood advocated black political self-determination on the ground that the only way to solve “the issue of Negro equality” was through their “full development as a nation” (1948: 143). But there was stronger support among American blacks for a strategy that demanded their inclusion as free and equal citizens in the body politic of the United States. The Civil War amendments, and the civil rights laws that accompanied them, promised such inclusion, and, in their struggle to defeat Jim Crow, blacks repeatedly called upon white Americans to live up to the promise. Equal civil and political rights for blacks as individuals, and integration into the mainstream institutions of society, rather than separate nationhood, were the goals of most American blacks, as shown by the widespread support among blacks for their civil rights movement.
For America's blacks during the 1950's and 60's, the alternative to the civil rights movement was not the intolerable perpetuation of Jim Crow, but rather a form of black nationalism, the main goal of which was obtaining increased resources from the broader society for black institutions and communities. Black self-government along the lines suggested by Haywood did not seem politically possible to most blacks during the civil rights era, but securing resources for strengthening black businesses and schools and improving black housing was a quite reasonable demand to make on whites. And so many black nationalists argued that, unless and until black communities and their institutions were strengthened, the promise of racial justice through integration and equal civil rights for individuals would prove hollow (Ture and Hamilton).
Valls has recently developed a “liberal black nationalism” (2010: 479) by adapting Kymlicka's account of group rights and arguing that, because American blacks are a kind of national minority victimized by historical injustice at the hands of the white population, “justice demands the support of black institutions and communities by the broader society” (474). Once this support is forthcoming, Valls contends, individual blacks will be in a position to make a free and fair choice as to whether, and to what degree, to participate in black institutions and to live in black communities or to become integrated into racially-mixed areas of society. But Elizabeth Anderson has argued, against black nationalism, that segregation is a “fundamental cause of social inequality and undemocratic practices” (2010: 2) and “[c]omprehensive racial integration is a necessary condition for a racially just future” (189). Anderson's argument entails that Valls's form of black nationalism is self-defeating: segregation itself works to prevent the white support for black communities and institutions for which such nationalism calls. However, her argument is consistent with Tommie Shelby's “pragmatic nationalism,” which holds that “black solidarity is merely a contingent strategy for creating greater freedom and social equality for blacks, a pragmatic but principled approach to achieving racial justice” (2005: 10). Shelby's form of black nationalism endorses the liberal principles of free and equal citizenship for all individuals, as does Valls's version, but, unlike the latter, Shelby's account does not reject the integrationist strategy advocated by Anderson. At the same time, it is difficult to see how black equality can be achieved without a much greater investment of social resources in black neighborhoods and institutions, and, perhaps, Anderson and Shelby can agree with Valls on the need for such investment.
In contrast to the civil rights movement of American blacks, Native Americans sought to mitigate the injustices perpetrated against them mainly by pursuing political self-determination, in the form of tribal self-rule. Even after the brutal tribal removals of the early 19th century and the efforts at the end of that century to destroy tribal control of lands through individual allotments, tribes still retained some territorial basis on which a measure of self-rule was possible. And during the black civil rights movement of the 1950's and 60's, there was tension between Native Americans and blacks due to their different attitudes toward self-determination and civil rights. Some Native Americans looked askance at the desire of blacks for inclusion and integration into white society, and they thought that the desire was hopelessly naïve (Deloria, 1988: 169–70). Such Native Americans were more in tune with radical black nationalists who favored Haywood's call for blacks to govern themselves politically in those jurisdictions where they were concentrated.
In 1968, Congress enacted an Indian Civil Rights Act (ICRA). The act extended the reach of certain individual constitutional rights against government to intratribal affairs. Tribal governments would for the first time be bound by constitutional principles concerning free speech, due process, cruel and unusual punishment, and equal protection, among others. Freedom of religion was omitted from the law as a result of the protests of the Pueblo, whose political arrangements were theocratic, but the law was a major incursion on tribal self-determination, nonetheless (Norgren and Shattuck, 1993: 169).
A married Pueblo woman brought suit in federal court, claiming that the tribe's marriage ordinances constituted sex discrimination against her and other women of the tribe, thus violating the ICRA (Santa Clara Pueblo v. Martinez). The ordinances excluded from tribal membership the children of a Pueblo woman who married outside of the tribe, while the children of men who married outsiders were counted as members. Martinez had initially sought relief in tribal forums, to no avail, before turning to the federal courts. The Supreme Court held that federal courts did not have jurisdiction to hear the case: the substantive provisions of the ICRA did apply to the Pueblo, but the inherent sovereign powers of the tribe meant that the tribal government had exclusive jurisdiction in the case. The ruling has been both questioned and defended by feminist legal scholars (MacKinnon, 1987; Valencia-Weber 2004).
In contrast to the United States, the Canadian Indian Act provides that men and women are to be treated equally when it comes to the band membership of their children (Johnston, 1995: 190). This law and the Santa Clara case raise the general issue of whether and when it is justifiable for a liberal state to impose liberal principles on illiberal (or not fully liberal) political communities that had been involuntarily incorporated into the larger state. Addressing this issue, Kymlicka (1995) argues that “there is relatively little scope for legitimate coercive interference” because efforts to impose liberal principles tend to be counterproductive, provoking the charge that they amount to “paternalistic colonialism.” Moreover, “liberal institutions can only really work if liberal beliefs have been internalized.” Kymlicka concludes, then, that liberals on the outside of an illiberal culture should support the efforts of those insiders who seek reform but should generally stop short of coercively imposing liberal principles (1995: 167). At the same time, Kymlicka acknowledges that there are cases in which a liberal state is clearly permitted to impose its laws, citing with approval the decision in a case that involved the application of Canadian law to a tribe that had kidnapped a member and forced him to undergo an initiation ceremony (44).
Applying Kymlicka's general line of thinking might prove contentious in many cases. Consider Santa Clara. His arguments could be used to support the decision in that case: the exercise of jurisdiction might be deemed “paternalistic colonialism.” But one might argue, instead, that jurisdiction is needed to vindicate the basic liberal right of gender equality. However, it does seem that, if a wrong akin to kidnapping or worse is required before federal courts can legitimately step in, then the Santa Clara case falls short of meeting such a requirement. The argument might then shift to whether the requirement imposes an excessively high hurdle for the exercise of federal jurisdiction. Accordingly, Kymlicka's approach might not settle the disagreement over Santa Clara, but it does provide a very reasonable normative framework in terms of which liberal thought can address the difficult issues presented by the case and, more generally, by the problem of extending liberal principles to Native American tribes.
The term ‘Jewish emancipation’ refers to those political processes, occurring from the last decades of the 18th century to the second half of the 19th century, through which the Jews of Western and Central Europe (roughly: Britain, France, Belgium, Holland, Germany, Italy, Switzerland, and Austria-Hungary) attained equal rights under the law. Its first major event was the declaration of equal citizenship for Jews by the French National Assembly (1791). However, Jewish emancipation was not a single process but a collection of them, proceeding in different ways and at different rates in the different parts of the continent. It also involved considerable backsliding at various points.
From the 16th until the end of the 18th century, Jews across Europe had been segregated by law into specified rural areas, towns, and city ghettos. They were prohibited from owning land or farming and from joining guilds, which monopolized craft production at the time. Severe restrictions were placed on their travel and special taxes imposed on them. More generally, Jews were regarded by the broader Christian society as an alien people, who had no right to be in Europe at all and could be legitimately expelled by any country that did not desire to tolerate their presence. And Jews were expelled at various times from many European jurisdictions, including England, Spain, Portugal, France, Holland and more than a few German and Italian states and cities. At the end of the 18th century, the German philosopher Fichte expressed a common view when he suggested that Jews were a “state within a state” and, addressing the Christians of Europe, asked rhetorically, “If you give [Jews] civic rights in your states, will not your other citizens be completely trod under foot?” (Fichte 1793/1995: 309)
In the areas they inhabited, Jews were permitted to organize themselves into self-governing communities, called kehillot, which had governing councils with the authority to impose and collect taxes and to punish Jews who had violated community norms and religious rules. The councils also had the power of excommunication, which involved prohibiting all members of the community from any interaction with the excommunicated individual (Katz 1961: chaps. IX-XI). And, although their communal autonomy had already begun to weaken in the 17th and 18th centuries (Ettinger 1976: 750), Jewish communities in Europe at the outset of emancipation fit the main features of groups that Kymlicka characterizes as “national minorities” with the right of self-government (1995: 18; see section 1.2 above).
During the period of emancipation, some Jews wanted to have strengthened “external protections” (see section 1.2 above) for their communal autonomy, but the main force of emancipation pushed in a different direction. Jewish emancipation was tied closely to the Enlightenment and the French Revolution, with their commitment to the equality and freedom of human individuals, and the dominant ethical concern of emancipation was not protection for communal autonomy but rather the attainment of equal rights. Still, Arendt argues that Jewish emancipation arose, not only from the political ideal of equality, but also from the European state's need for financial credit, which only Jews were prepared to meet at the time (1951/1976: 11–12). Accordingly, she claims that Jewish emancipation had an “ever-present equivocal meaning” (12): on the one hand, it could be construed as movement for equal rights, but, on the other, it could be seen as the bestowal of privileges on Jews by the ruling powers for services rendered.
This double meaning is reflected in the different understandings that Bruno Bauer and Karl Marx had of Jewish emancipation. Bauer, a German theologian and one of the left-wing “Young Hegelians,” complained that “[t]he emancipation problem has until now been treated in a basically wrong manner by considering it one-sidedly as a Jewish problem … Not only Jews but, we [Christians], also want to be emancipated.” (1843/1958: 63) Bauer regarded emancipation as an effort by Jews to gain special privileges that would allow them to continue living apart from Christian society, following their own comprehensive religious law. In Bauer's eyes, the Jews' idea that they were God's chosen people made a mockery of the suggestion that they could ever regard themselves as equal citizens, to be treated just the same as Christians. On the other hand, from Bauer's perspective, although Christianity was “the perfection of Judaism” (83) insofar as it did not treat any particular nation as the chosen people and offered salvation to all nations, Christians, too, were exclusionary in their own way, by regarding themselves as meriting a privileged political and legal status in contrast with non-Christians. For Bauer, then, the only route to genuinely free and equal citizenship under the law was for Jews to give up Judaism without becoming Christians, and for Christians simply to give up Christianity.
In contrast, Marx, notwithstanding his hostility toward Jews and their alleged worship of money, criticized Bauer for thinking that free and equal citizenship depended upon citizens relinquishing their faiths. Marx, who was a fellow Young Hegelian at the time, understood Jewish emancipation as part of a more general process in which “the state emancipates itself from religion” by no longer requiring its members to declare which faith they embrace and, instead, establishing a strict separation of church and state, such as was found in the United States (1843/1994: 5–8). Making religion legally and politically irrelevant, not making it disappear, was the aim and accomplishment of a democratic republic. The result, in Marx's eyes, was not that such a republic achieved the highest form of human freedom for its citizens (for that achievement, religion would have to disappear), but that a democratic republic could provide for its citizens the highest form of freedom possible within the context of a society dominated by money and the pursuit of profit.
Jewish emancipation was a success in certain respects, but, ultimately, a catastrophic failure. During the second half of the 19th century, Jews achieved equal rights under the law throughout Western and Central Europe and became integrated into the mainstream institutions of society. Their economic situation had improved dramatically over the course of the century, and they filled professional occupations, such as law and university teaching, which had previously been closed to them (Richarz 1975; Ettinger 1976). However, antisemitism remained a strong and growing social force, and political parties with explicitly antisemitic platforms first began to form and gain support in the latter part of the century (Arendt 1951/1976: 35–50). In response to the continued antisemitism, Theodor Herzl proposed Zionism as the solution, a movement to form an independent Jewish state in Palestine to which European Jewry could and should emigrate. The movement attracted some Jews and was strongly opposed by others (Mendes-Flohr and Reinharz 1995: chap. 10). But the ultimate failure of Jewish emancipation would occur prior to the establishment of a Jewish state and would arrive with the rise of the Nazi Party to power. In little more than a decade, Jews went from being equal citizens of the European countries they inhabited to being a stateless people deprived of all legal rights and targeted for physical and cultural annihilation. No other civil rights movement has ever suffered such a devastating reversal, and only the military defeat of Nazi Germany prevented the total destruction of European Jewry.
Civil rights are those rights that constitute free and equal citizenship in a liberal democracy. Such citizenship has two main dimensions, both tied to the idea of autonomy. Accordingly, civil rights are essentially connected to securing the autonomy of the citizen.
To be a free and equal citizen is, in part, to have those legal guarantees that are essential to fully adequate participation in public discussion and decisionmaking. A citizen has a right to an equal voice and an equal vote. In addition, she has the rights needed to protect her “moral independence,” that is, her ability to decide for herself what gives meaning and value to her life and to take responsibility for living in conformity with her values (Dworkin, 1995: 25). Accordingly, equal citizenship has two main dimensions: “public autonomy,” i.e., the individual's freedom to participate in the formation of public opinion and society's collective decisions; and “private autonomy,” i.e., the individual's freedom to decide what way of life is most worth pursuing (Habermas 1996). The importance of these two dimensions of citizenship stems from what Rawls calls the “two moral powers” of personhood: the capacity for a sense of justice and the capacity for a conception of the good (1995: 164; 2001: 18). A person stands as an equal citizen when society and its political system give equal and due weight to the interest each citizen has in the development and exercise of those capacities.
The idea of equal citizenship can be traced back to Aristotle's political philosophy and his claim that true citizens take turns ruling and being ruled (Politics: 1252a16). In modern society, the idea has been transformed, in part by the development of representative government and its system of elections (Manin 1997). For modern liberal thought, by contrast, citizenship is no longer a matter of having a direct and equal share in governance, but rather consists in a legal status that confers a certain package of rights that guarantee to an individual a voice, a vote, and a zone of private autonomy. The other crucial differences between modern liberalism and earlier political theories concern the range of human beings who are regarded as having the capacity for citizenship and the scope of private autonomy to which each citizen is entitled as a matter of basic right. Modern liberal theory is more expansive on both counts than its ancient and medieval forerunners.
It is true that racist and sexist assumptions plagued liberal theory well into the twentieth century. However, two crucial liberal ideas have made possible an internal critique of racism, sexism, and other illegitimate forms of hierarchy. The first is that society is constructed by humans, a product of human will, and not some preordained natural or God-given order. The second is that social arrangements need to be justified before the court of reason to each individual who lives under them and who is capable of reasoning. The conjunction of these ideas made possible an egalitarianism that was not available to ancient and medieval political thought, although this liberal egalitarianism emerged slowly out of the racist and sexist presuppositions that infused much liberal thinking until recent decades.
Many contemporary theorists have argued that taking liberal egalitarianism to its logical conclusion requires the liberal state to pursue a program of deliberately reconstructing informal social norms and cultural meanings. They contend that social stigma and denigration still operate powerfully to deny equal citizenship to groups such as blacks, women, and gays. Accordingly, Kernohan has argued that “the egalitarian liberal state should play an activist role in cultural reform” (1998: xi), and Koppelman has taken a similar position: “the antidiscrimination project seeks to reconstruct social reality to eliminate or marginalize the shared meanings, practices and institutions that unjustifiably single out certain groups of citizens for stigma and disadvantage” (1996: 8). This position is deeply at odds with at least some of the ideas that lie behind the advocacy of third-generation civil rights. Those rights ground claims of cultural survival, whether or not a culture's meanings, practices and institutions stigmatize and disadvantage the members of some ascriptively-defined group. The egalitarian proponents of cultural reconstruction can be understood as advocating a different kind of “third-generation” for the civil rights movement: one in which the state, having attacked legal, political and economic barriers to equal citizenship, now takes on cultural obstacles.
A cultural-reconstruction phase of the civil rights movement would run contrary to Kukathas's argument that it is too dangerous to license the state to intervene against cultures that engage in social tyranny (2001). It also raises questions about whether state-supported cultural reconstruction would violate basic liberties, such as freedom of private association. The efforts of New Jersey to apply antidiscrimination law to the Boy Scouts, a group which discriminates against gays, illustrate the potential problems. The Supreme Court invalidated those efforts on grounds of free association (Boy Scouts v. Dale). Nonetheless, it may be necessary to reconceive the scope and limits of some basic liberties if the principle of free and equal citizenship is followed through to its logical conclusions.
In liberal democracies, civil rights claims are typically conceptualized in terms of the idea of discrimination (Brest, 1976). Persons who make such claims assert that they are the victims of discrimination. In order to gain an understanding of current discussion and debate regarding civil rights, it is important to disentangle the various descriptive and normative senses of ‘discrimination’.
In one of its central descriptive senses, ‘discrimination’ means the differential treatment of persons, however justifiable or unjustifiable the treatment may be. In a distinct but still primarily descriptive sense, it means the disadvantageous (or, less commonly, the advantageous) treatment of some persons relative to others. This sense is not purely descriptive in that an evaluative judgment is involved in determining what counts as a disadvantage. But the sense is descriptive insofar as no evaluative judgment is made regarding the justifiability of the disadvantageous treatment.
In addition to its descriptive senses, there are two normative senses of ‘discrimination’. In the first, it means any differential treatment of the individual that is morally objectionable. In the second sense, ‘discrimination’ means the wrongful denial or abridgement of the civil rights of some persons in a context where others enjoy their full set of rights. The two normative senses are distinct because there can be morally objectionable forms of differential treatment that do not involve the wrongful denial or abridgement of civil rights. If I treat one waiter rudely and another nicely, because one is a New York Yankees fan and the other is a Boston Red Sox fan, then I have acted in a morally objectionable way but have not violated anyone's civil rights.
Discrimination that does deny civil rights is a double wrong against its victims. The denial of civil rights is by itself a wrong, whether or not others have such rights. When others do have such rights, the denial of civil rights to persons who are entitled to them involves the additional wrong of unjustified differential treatment. On the other hand, if everyone is denied his civil rights, then the idea of discrimination would be misapplied to the situation. A despot who oppresses everyone equally is not guilty of discrimination in any of its senses. In contrast, discrimination is a kind of wrong that is found in systems that are liberal democratic but imperfectly so: it is the characteristic injustice of liberal democracy.
The first civil rights law, enacted in 1866, embodied the idea of discrimination as wrongful denial of civil rights to some while others enjoyed their full set of rights. It declared that “all persons” in the United States were to have “the same right…to make and enforce contracts…and to the full and equal benefit of all laws…as is enjoyed by white citizens” (42 U.S.C.A. §1981). The premise was that whites enjoyed a fully adequate scheme of civil rights and that everyone else who was entitled to citizenship was to be legally guaranteed that same set of rights.
It is a notable feature of civil rights law that its prohibitions do not protect only citizens. Any person within a given jurisdiction, citizen or not, can claim the protection of the law, at least within certain limits. Thus, noncitizens are protected by fair housing and equal employment statutes, among other antidiscrimination laws. Noncitizens can also claim the legal protections of due process if charged with a crime. Even illegal aliens have limited due process rights if they are within the legal jurisdiction of the country. On the other hand, noncitizens cannot claim under U.S. law that the denial of political rights amounts to wrongful discrimination. Noncitizens can vote in local and regional elections in certain countries (Benhabib, 2006: 46), but the denial of equal political rights would seem to be central to the very status of noncitizen.
The application of much of civil rights law to noncitizens indicates that many of the rights in question are deeper than simply the rights that constitute citizenship. They are genuine human rights to which every person is entitled, whether she is in a location where she has a right to citizenship or not. And civil rights issues are, for that reason, regarded as broader in scope than issues regarding the treatment of citizens.
Antidiscrimination laws typically pick out certain categories such as race and sex for legal protection, define certain spheres such as employment and public accommodations in which discrimination based on the protected categories is prohibited, and establish special government agencies, such as the Equal Employment Opportunity Commission, to assist in the laws' enforcement. There are many questions that can be raised concerning the justifiability of such laws. Some of the central philosophical questions derive from the fact that the laws restrict freedom of association, including the liberty of employers to decide whom they will hire. Some have argued that the liberal commitment to free association requires the rejection of antidiscrimination laws, including those that ban employment discrimination such as the Civil Rights Act of 1964 (Epstein, 1992). Most liberal thinkers reject this view, but any liberal defense of antidiscrimination laws must cite considerations sufficiently strong to override the infringements on freedom of association that the laws involve.
There are two different approaches within liberal thought to the justification of antidiscrimination laws. Both approaches hold that, in certain important areas of life, such as employment opportunities and access to public accommodations, individuals have a moral right to be legally protected against any disadvantage being imposed upon them on account of their race, sex, or membership in some other socially salient group. However, on one approach, the only genuine form of discrimination involves the action of an agent who aims at disadvantaging an individual on account of the individual's race, sex, etc. Such an action is often called “direct discrimination” (or, in American law, “disparate-treatment” discrimination). In contrast, the second approach holds that there is another form of discrimination against which individuals have a right to be protected and which does not necessarily involve an agent who aims at disadvantaging them because of their race or sex or other social-group membership. Often called “indirect discrimination” (or, in American law, “disparate-impact” discrimination), this form is said to consist in actions, policies, or systems of rules that have the effect of disproportionately disadvantaging the members of a particular socially-salient group. Thinkers who take this second approach contend that antidiscrimination law should prohibit, not only direct discrimination, but the indirect form as well, while those who take the first approach deny that “indirect” discrimination really counts as discrimination at all. In its interpretation of the U.S. Constitution, the Supreme Court appears to have adopted the first approach (Balkin 2001), but many legal scholars endorse some version of the second in understanding the constitutional guarantee of equality (Karst 1989). (For a more complete examination of the distinction between direct and indirect discrimination and of the question of what makes discrimination a wrong against individuals, see the entry on discrimination.)
Many debates over civil rights issues turn on assumptions about the scope and effects of existing discrimination (i.e., objectionable disadvantageous treatment) against particular groups. For example, some thinkers hold that systemic discrimination based on race and gender is largely a thing of the past in contemporary liberal democracies (at least in economically advanced ones) and that the current situation allows persons to participate in society as free and equal citizens, regardless of race or gender (Thernstrom and Thernstrom, 1997; Sommers, 1994). Many others reject that view, arguing that white skin privilege and patriarchy persist and operate to substantially and unjustifiably diminish the life-prospects of nonwhites and women (Bobo, 1997; Smith 1993). These differences drive debates over affirmative action, race-conscious electoral districting, and pornography, among other issues.
Questions about the scope and effects of discrimination are largely but not entirely empirical in character. Such questions concern the degree to which participation in society as a free and equal citizen is hampered by one's race or sex. And addressing that concern presupposes some normative criteria for determining what is needed to possess the status of such a citizen.
Moreover, there are subtle aspects of discrimination that are not captured by thinking strictly in terms of categories such as race, sex, religion, sexual orientation, and so on. Piper analyzes “higher-order” forms of discrimination in which certain traits, such as speaking style, come to be arbitrarily disvalued on account of their association with a disvalued race or sex (2001). Determining the presence and effects of such forms of discrimination in society at large would be a very complicated conceptual and empirical task. Additional complications stem from the fact that different categories of discrimination might intersect in ways that produce distinctive forms of unjust disadvantage. Thus, some thinkers have asserted that the intersection of race and sex creates a form of discrimination against black women which has not been adequately recognized or addressed by judges or liberal legal theorists (Crenshaw, 1998). And other thinkers have begun to argue that our understanding of discrimination must be expanded beyond the white-black paradigm to include the distinctive ways in which Asian-Americans and other minority groups are subjected to discriminatory attitudes and treatment (Wu, 2002).
Among the most careful empirical studies of discrimination have been those conducted by Ayres (2001). He found evidence of “pervasive discrimination” in several types of markets, including retail car sales, bail-bonding, and kidney-transplantation. Yet, his assessment is that “we still do not know the current ambit of race and gender discrimination in America” (425).
Some civil rights laws in the United States protect persons from discrimination based on sexual orientation, but many people contest the legitimacy of the laws. The state of Colorado went so far as to ratify an amendment to its constitution that would prohibit any jurisdiction within the state from enacting a civil rights law that would protect homosexuals. The amendment was eventually invalidated by the U.S. Supreme Court on the ground that it was the product of simple prejudice and served no legitimate state purpose, thus violating the Equal Protection Clause (Romer v. Evans).
Much of the discussion of “gay rights” involves the question of whether sexual orientation is genetically determined, socially determined, or the product of individual choice. However, it is not clear why the question is relevant. The discussion appears to assume that genetic determination would vindicate the civil rights claims of gays, because sexual orientation would then be like race or sex insofar as it would be biologically fixed and immutable. But it is a mistake to think that racial or sex discrimination is morally objectionable because of the biological fixity or unchosen nature of race and sex. It is objectionable because it expresses ill-will or indifference, and it is unjust because it treats an individual in a morally arbitrary manner and, under current conditions, reinforces social patterns of disadvantage that seriously diminish the life prospects of many persons. The view that sexual orientation is like race or sex in a morally relevant way should focus on the analogous features of discrimination based on sexual orientation.
Wintemute (1995) and Koppelman (1994 and 1997) assert that discrimination based on sexual orientation is not just analogous to sex discrimination but that it is a form of sex discrimination. If it is legally permissible for Jane to have sex with John, then banning Joe's having sex with John would seem to amount to discrimination (disadvantageous treatment) against Joe on grounds of his sex. If Joe were a woman, his having sex with John would be permitted, so he is being treated differently because of his sex. However, Koppelman contends that this formal argument should be supplemented by more substantive ones referring to the systemic patterns of social disadvantage from which gays and lesbians suffer. In fact, one can argue that the treatment of gays and lesbians is an injustice to them as individuals and amounts to a systemic pattern of unjust disadvantage. The individual injustice arises from the arbitrary nature of denying persons valuable life-opportunities, such as employment and marriage, on the basis of their sexual orientation. The systemic injustice arises from the repeated and widespread acts of individual injustice.
The most controversial civil rights issues regarding sexual orientation concern the principle of equal treatment for same-sex and heterosexual couples. Most scholars endorse such a principle (Wardle 1996) and argue that equal treatment requires that same-sex marriages be legalized (Eskridge 1996). Moreover, it is often argued in the literature that a person's choice of sex partner is central to her life and protected under a right of privacy. In Bowers v. Hardwick, the United States Supreme Court rejected this argument, upholding the criminalization of homosexual sodomy. The decision was condemned by legal and political thinkers and was overturned by the Court in Lawrence v. Texas. The Court invoked the right of privacy in striking down the state's criminal ban on sodomy between same-sex partners. Nonetheless, some scholars who argue for the equal legal treatment of same-sex relations contend that privacy-based arguments are inadequate. They point out that one can hold the view that adults have a right to engage in same-sex intimacies even as one contends that such intimacies are morally abominable and ought not to receive any encouragement from government (Sandel 1996: 107; Koppelman 1997: 1646). Such a view would reject equal legal treatment for those in intimate same-sex relationships.
Finnis takes such a view, arguing that same-sex relations are “manifestly unworthy of the human being and immoral” and should not be encouraged by the state, but finding that criminalizing same-sex relations violates rights of individual privacy (1996: 14). Lee and George also find such relations to be morally defective and unworthy of equal treatment by the state (1997), though George (1993) does not think that any sound a priori principle prohibits criminalization.
Finnis, Lee and George argue for their condemnation of same-sex relations on the ground of natural law theory. However, unlike traditional versions of natural law theory, their version does not rest on any explicit theological or metaphysical claims. Rather, it invokes independent principles of practical reasoning that articulate the basic reasons for action. Such reasons are the fundamental goods that action is capable of realizing and, for Finnis, Lee and George, include “marriage, the conjuntio of man and woman” (Finnis 1996: 4). Homosexual conduct, masturbation, and all extra-marital sex aim strictly at “individual gratification” and can be no part of any “common good.” Such actions “harm the character” of those voluntarily choosing them (Lee and George, 1997: 135). In taking the actions, a person becomes a slave to his passions, allowing his reason to be overridden by his raw desire for sensuous pleasure.
On Finnis's account, when consensual sexual conduct is private, government may not outlaw it, but government “can rightly judge that it has a compelling interest in denying that ‘gay lifestyles’ are a valid and humanly acceptable choice and form of life” (1996: 17). And for Finnis, Lee and George, equal treatment of same-sex and heterosexual relations is out of the question due to the morally defective character of same-sex relations.
Macedo responds to Finnis by arguing that “all of the goods that can be shared by sterile heterosexual couples can also be shared by committed homosexual couples” (1996: 39). Macedo points out that Finnis does not condemn sexual intercourse by sterile heterosexual couples. But Finnis replies that there is a relevant difference between homosexual couples and sterile heterosexual ones: the latter but not the former are united “biologically” when they have intercourse. Lee and George make essentially the same point: only heterosexual couples can “truly become one body, one organism” (1997: 150). But Macedo points out that, biologically, it is not the man and woman who unite but the sperm and the egg (1996: 37). It can be added that the “biological unity” argument seems to run contrary to Finnis's claim that his position “does not seek to infer normative conclusions from non-normative (natural-fact) premises” (1997: 16). More importantly, Macedo and Koppelman make the key point that the human good possible through intimate relations is a function of “mutual commitment and stable engagement” (Macedo, 1996: 40) and that same-sex couples can achieve “the precise kind of human good” that is available to heterosexual ones (Koppelman, 1997: 1649; also see Corvino, 2005). Accordingly, equal treatment under the law for same-sex couples, including the recognition of same-sex marriage, would remove unjustifiable obstacles faced by same-sex couples to the achievement of that human good.
The issue of same-sex marriage remains hotly contested in the courts and the political arena. In response to political efforts in some states to legalize same-sex marriage, the U.S. Congress enacted the Defense of Marriage Act (DOMA), a statute restricting the term ‘marriage’ as it appears in national legislation and administrative policy to the union of a man and a woman. And the voters of California approved Proposition 8, an initiative amending the state's constitution to declare that “[o]nly marriage between a man and a woman is valid or recognized in California.” But, as of the time of this writing, a federal Circuit Court of Appeals has struck down the Proposition, writing, “Proposition 8 serves no purpose, and has no effect, other than to lessen the status and human dignity of gays and lesbians in California, and to officially reclassify their relationships and families as inferior to those of opposite-sex couples. The Constitution simply does not allow for ‘laws of this sort.’” (Perry v. Brown, quoting Romer v. Evans). Additionally, two federal district courts have invalidated DOMA on constitutional grounds, and five states and the District of Columbia currently issue marriage licenses to same-sex couples. Internationally, same-sex couples also have the right to marry in Canada, Spain, Portugal, Iceland, Sweden, Norway, South Africa, Mexico and Argentina.
During the 1970's and 80's, persons with disabilities increasingly argued that they were being treated as second-class citizens. They organized into a civil rights movement that pressed for legislation that would help secure for them the status of equal citizens. Protection against discrimination based on disability was written into the Canadian Charter of Rights and Freedoms and the Charter of Fundamental Rights of the European Union. The disability rights movement in the U.S. culminated with the passage of the Americans With Disabilities Act of 1990 (ADA). The ADA has served as a model for legislation in countries such as Australia, India and Israel.
The traditional model for understanding disability is called the “medical model.” It is reflected in many pre-ADA laws and in some philosophical discussions of disability which treat it as an issue of the just distribution of health care (Daniels, 1987). According to the medical model, a disabled person is one who falls below some baseline level that defines normal human functioning. That level is a natural one, on this view, in that it is determined by biological facts about the human species. Thus, the medical model supposes that the question of who counts as disabled can be answered in a way that is value-free and that abstracts from existing social practices and the physical environment those practices have constructed. It also gives the medical profession a privileged position in determining who is disabled, as the study and treatment of normal and subnormal human functioning is the specialty of that profession.
The consensus among current disability theorists is that the medical model should be rejected. Any determination that a certain level of function is normal for the species will presuppose judgments that do not simply describe biological reality but impose some system of evaluation upon it. Moreover, the level of functioning a person can achieve does not depend solely on her own individual abilities: it depends as well on the social practices and the physical environment those practices have shaped.
Disability theorists thus posit an important analogy between the categories of ‘race’ and ‘disability’. As they understand it, neither category refers to any real distinctions in nature. Just as there is variation in skin color, there is variation in acuity of vision, physical strength, ability to walk and run and so on (Amundson, 2000). And just as there is no natural line dividing one “race” from another, there is no natural line dividing those who are functionally “abnormal” from those who are not so.
The rejection of the medical model has led to a “social model,” according to which certain physical or biological properties are turned into dysfunctions by social practices and the socially-constructed physical environment (Francis and Silvers, 2000). For example, lack of mobility for those who are unable to walk is not simply a function of their physical characteristics: it is also a function of building practices that employ stairs instead of ramps and of automotive design practices that require the use of one's legs to drive a car. There is nothing necessary about such practices. Accordingly, the social model conceives of disability as socially-imposed dysfunction.
The social model brings attention to how engineering and design practices can work to the disadvantage of persons with certain physical characteristics. And the idea of dysfunction is certainly a value-laden one. But it seems no more accurate to think that dysfunction is entirely imposed by society than it is to think that it is entirely the product of an individual's physical or mental characteristics. Individual characteristics in the context of the socially-constructed environment determine the level of functioning that a person can achieve (Amundson, 1992). And some individual characteristics would impair a person's functioning under all or almost all practicable alternatives to current social practices. Moreover, despite the fact that “normal human functioning” is a value-laden concept, it does not follow that it is entirely subjective or that reasonable efforts to specify the elements of some morally acceptable level of human functioning are misguided (Nussbaum and Sen, 1993). Indeed, some defensible understanding of what counts as better or worse human functioning would seem to be necessary to determine when some social practice has turned a physical (or mental) characteristic into a significant disadvantage for a person.
In addition, the social model's conception of what it is to be a disabled person seems overbroad. The social practice of requiring students to pass courses in order to receive a degree creates a barrier that some persons cannot surmount. It does not seem that such people are, ipso facto, disabled. Such examples of “exclusionary” social practices could be multiplied indefinitely. Some thinkers may not be troubled by the implication that everyone is disabled in every respect in which she is excluded or otherwise disadvantaged by some social practice. But it is difficult to see how the idea of disability would then be of much use.
The disability rights movement began with the idea that discrimination on the basis of disability was not different in any morally important way from discrimination based on race. The aim of the movement was to enshrine in law the same kind of antidiscrimination principle that protected persons based on their race. But some theorists have questioned how well the analogy holds. They point out that applying the antidiscrimination norm to disability requires taking account of physical or mental differences among people. This seems to be treatment based on a person's physical (or mental) features, apparently the exact opposite of the ideal of “colorblindness” behind the traditional antidiscrimination principle.
Even race-based affirmative action does not really seem to be parallel to antidiscrimination policies that take account of disability. Advocates of affirmative action assert that the social ideal is for persons not to be treated on the basis of their race or color at all. Race-conscious policies are seen as instruments that will move society toward that ideal (Wasserstrom, 2001).
In contrast, policies designed to counter discrimination based on disability are not sensibly understood as temporary measures or steps toward a goal in which people are not treated based on their disabilities. The policies permanently enshrine the idea that in designing buildings or buses or constructing some other aspect of our physical-social environment, we must be responsive to the disabilities people have in order for the disabled to have “fair equality of opportunity” (Rawls, 2001: 43–44). The need for a permanent “accommodation” of persons with disabilities seems to mark an important difference in how the antidiscrimination norm should be understood in the context of disability, as opposed to the context of race.
However, it is important to recognize that, at the level of fundamental principle, the reasons why disability-based discrimination is morally objectionable and even unjust are essentially the same as the reasons why racial discrimination is so. At the individual level, disadvantageous treatment of the disabled is often rooted in ill-will, disregard, and moral arbitrariness. At the systemic level, such treatment creates a social pattern of disadvantage that reduces the disabled to second-class status. In those two respects, the grounds of civil rights law are no different when it comes to the disabled.
Another way in which disability is thought to be fundamentally different from race concerns the special needs that the disabled often have that make life more costly for them. These extra costs would exist even if the socially-constructed physical environment were built to provide the disabled with fair equality of opportunity and their basic civil and political liberties were secured. In order to function effectively, disabled persons may need to buy medications or therapies or other forms of assistance that the able-bodied do not need for their functioning. And there does not seem to be any parallel in matters of race to the special needs of some of those who are disabled. The driving idea of the civil rights movement was that blacks did not have any special needs: all they needed was to have the burdens of racism lifted from them and, once that was accomplished, they would flourish or fail like everyone else in society.
However, Silvers (1998) argues that the parallel between race and disability still holds: all that the disabled may claim from society as a matter of justice is that they have fair equality of opportunity and the same basic civil rights as everyone else. Any special needs that the disabled may have do not provide the grounds of any legitimate claims of justice. On the other hand, Kittay (2000) argues that the special needs of the disabled are a matter of basic justice. She focuses on the severely mentally disabled, for whom fair opportunity in the labor markets and political rights in the public sphere will have no significance, and on the families which have the responsibility of caring for the severely disabled. Pogge (2000) also questions Silvers' view, suggesting that it is implausible to deny that justice requires that society provide resources for meeting the needs of the severely disabled. Still, some version of Silvers' approach may be justifiable when it comes to disabled persons who have the capacity “to participate fully in the political and civic institutions of the society and, more broadly, in its public life” (Pogge, 2000: 45). In the case of such persons, the basic civil right to equal citizenship would require that they have the equal opportunity to participate in such institutions, regardless of their disability. Although there may be some aspects of the racial model that cannot be applied to persons with severe forms of mental disability, the principles behind the American civil rights struggles of the 1950's and 60's remain crucial normative resources for understanding and combating forms of unjust discrimination that have only more recently been addressed by philosophers and by society more broadly.
The emergence of the issue of disability rights has posed an important challenge for versions of liberalism inspired by the social contract tradition. One of the putative advantages of such forms of liberalism is that they better reflect strong and widely held intuitions about justice and individual rights than does utilitarianism. As Rawls famously wrote, “Each person possesses an inviolability founded on justice that even the welfare of society as a whole cannot override” (1999: 3). However, several thinkers have argued that Rawls's own theory does not make adequate room for the rights of the disabled.
Social contract theory is commonly divided between two competing versions: contractarianism and contractualism. The former represents principles of justice as principles that would be agreed to by rational and self-interested individuals for the regulation of a society in which they are to cooperate with one another (Gauthier 1986). The principles chosen will, like a typical contract, result from bargaining among the parties in which each party offers to bring something of value to the others (i.e., his potential cooperative efforts and the fruits thereof) on the condition that the others bring something of sufficient value to him. Thus, contractarian justice is justice understood in terms of mutual advantage. In contrast, contractualism represents principles of justice as principles that would be agreed to by individuals who are not only advantage-seeking but also “reasonable,” in the sense that they are seeking terms of cooperation that can be justified to all of the parties as “free and equal citizens” (Rawls 1993: 48–54). Contractualist justice is justice understood in terms of mutual respect and reciprocity.
Contractarianism runs into the problem that (some of) the disabled might simply be excluded from the bargaining altogether, because they do not bring anything of sufficient value to the table to make it worthwhile for the parties to bargain with them. Thus, Nussbaum (2006) construes Rawls's theory as (in part) contractarian and criticizes it as exclusionary when it comes to the disabled. But Becker (2005) defends the contractarian view of justice in terms of mutual advantage, arguing that it can incorporate a conception of reciprocity sufficiently rich to underwrite principles that truly do justice to the disabled. Stark (2007) and Brighouse (2001) argue that Rawls's theory can be extended or modified to take account of the disabled, without repudiating its contractarian core. But Hartley (2009a, 2009b, and 2011) construes Rawls's theory as a fully contractualist one and contends that almost all of the disabled can make a cooperative contribution in some area of social life, even if not in the market economy.
Kittay (1999 and 2001) agrees with the liberal idea that justice must not be sacrificed for other values, but she doubts that any form of liberalism can make adequate room for the claims of justice made on behalf of the severely disabled. In contrast, Silvers and Francis (2005) defend a form of contract theory in which the parties seek to build mutual trust. They argue that the interests of the disabled would not be discounted in such a contract.
- Americans With Disabilities Act. 42 U.S.C. §§12101–12213 (1999).
- Bowers v. Hardwick 478 U.S. 186 (1986).
- Boy Scouts v. Dale, No. 99–699 (2000).
- Civil Rights Act of 1866. 42 U.S.C §1981 (1999).
- Civil Rights Act of 1964. 42 U.S.C. §§2000e et seq.
- Defense of Marriage Act 28 U.S.C. §1738c (1999).
- Ex Parte Crow Dog 109 U.S. 556 (1883).
- Indian Civil Rights Act of 1968. 28 U.S.C. §§1301–1303.
- Oklahoma Tax Commission v. Citizen Band, Potawatomi Indian Tribe 498 U.S. 505 (1991).
- Perry v. Brown. February 7, 2012 (9th Circuit, Case No. 10-16696).
- Pregnancy Discrimination Act 42 U.S.C. §2000 (e)(k).
- Romer v. Evans 517 U.S. 620 (1996).
- Santa Clara Pueblo v. Martinez 436 U.S. 49 (1978).
- Amar, Akhil Reed, 1998. The Bill of Rights, New Haven: Yale University Press.
- Amundson, Ronald, 2000. “Biological Normality and the ADA,” in L.P. Francis and A. Silvers (eds.), Americans with Disabilities, New York: Routledge, pp. 102–110.
- –––, 1992. “Disability, Handicap, and the Environment,” Journal of Social Philosophy, 23: 105–19.
- Anderson, Elizabeth, 2010. The Imperative of Integration, Princeton: Princeton University Press.
- Appiah, K. Anthony, 1996. “Race, Culture, Identity: Misunderstood Connections,” in A. Gutmann and K.A. Appiah, Color Conscious, Princeton: Princeton University Press, pp. 30–105.
- Arendt, Hannah, 1951/1976. The Origins of Totalitarianism, New York: Harcourt.
- Ayers, Ian, 2001. Pervasive Discrimination, Chicago: University of Chicago Press, 2001.
- Balkin, Jack (ed.), 2001. What Brown v. Board of Education Should Have Said : The Nation's Top Legal Experts Rewrite America's Landmark Civil Rights Decision, New York: New York University Press.
- Barry, Brian, 2001. Culture and Equality, Cambridge, MA: Harvard University Press.
- Bauer, Bruno, 1843/1958. The Jewish Problem, Helen Lederer, trans. Cincinnati: Hebrew Union College.
- Becker, Lawrence, 2005. “Reciprocity, Justice, and Disability,” Ethics, 116: 9–39.
- Brighouse, Harry, 2001. “Can Justice as Fairness Accommodate the Disabled?,” Social Theory and Practice, 27: 537–60.
- Bobo, Lawrence, 1997. “Laissez-Faire Racism,” in Steven Tuch and Jack Martin, Racial Attitudes in the 1990's, Westport, CT: Praeger, pp. 15–42.
- Boxill, Bernard, 1992. Blacks and Social Justice revised ed. Lanham, MD: Rowman and Littlefield.
- Brest, Paul, 1976. “In Defense of the Antidiscrimination Principle,” Harvard Law Review, 90: 1–54.
- Corvino, John, 2005. “Homosexuality and the PIB Argument,” Ethics, 115: 501–34.
- Cranston, Maurice, 1967. “Human Rights: Real and Supposed,” in D.D. Raphael (ed.), Political Theory and the Rights of Man, Bloomington, IN: Indiana U.P., pp. 43–51
- Crenshaw, Kimberle, 1998. “A Black Feminist Critique of Antidiscrimination Law and Politics,” in D. Kairys (ed.), The Politics of Law, 3rd edition. New York: Basic Books, pp. 356–80.
- Deloria, Vine, Jr., 1988. Custer Died for Your Sins, Norman, OK: University of Oklahoma Press.
- Dewey, John, 1939. Freedom and Culture, New York: Capricorn.
- Dworkin, Ronald, 1995. Freedom's Law, Cambridge, MA: Harvard University Press.
- Epstein, Richard, 1992. Forbidden Grounds : The Case Against Employment Discrimination Laws, Cambridge, MA: Harvard U.P.
- Eskridge, William, 1996. The Case for Same-Sex Marriage. New York: Free Press.
- Eskridge Jr., William and Darren R. Spedale, 2006. Gay Marriage : For Better or For Worse?: What We've Learned from the Evidence, New York: Oxford University Press.
- Ettinger, Shmuel, 1976. “The Modern Period,” in H.H. Ben-Sasson (ed.), A History of the Jewish People, Cambridge: Harvard University Press, pp. 727–1096.
- Fichte, Johann Gottlieb, 1793/1995. “A State Within a State,” in P. Mendes-Flohr and J. Reinharz (eds.), The Jew in the Modern World, 2nd edition, New York: Oxford University Press, pp. 309–310.
- Finnis, John, 1997. “The Good of Marriage and the Morality of Sexual Relations,” American Journal of Jurisprudence, 42: 97–134.
- –––, 1996. “Natural Law Theory and Limited Government,” in Robert P. George, ed., Natural Law, Liberalism and Morality, Oxford: Oxford University Press, pp. 1–26
- Francis, Leslie Pickering and Anita Silvers (eds.), 2000. “Introduction,” in Francis and Silvers (eds.), Americans with Disabilities, New York: Routledge.
- Gardner, John, 1998. “On the Ground of Her Sex(uality),” Oxford Journal of Legal Studies, 18: 167–87.
- Gauthier, David, 1986. Morals By Agreement, Oxford: Clarendon Press.
- George, Robert, 1993. Making Men Moral, Oxford: Clarendon Press.
- Gitlin, Todd, 1995. The Twilight of Common Dreams, New York: Henry Holt.
- Habermas, Jurgen, 1996. “Citizenship and National Identity,” in Between Facts and Norms, Cambridge, MA: MIT Press, pp. 491–515.
- Hartley, Christie, 2011. “Disability and Justice,” Philosophy Compass, 6: 120–132.
- –––, 2009a. “An Inclusive Contractualism: Obligations to the Mentally Disabled,” in K. Brownlee and A. Cureton (eds). Disability and Disadvantage, Oxford: Oxford University Press, pp. 138–162.
- –––, 2009b. “Justice for the Disabled: A Contractualist Approach,” Journal of Social Philosophy, 40: 17–36.
- Haywood, Harry, 1948. Negro Liberation, New York: International Publishers.
- Holmes, Stephen and Cass Sunstein, 1999. The Cost of Rights, New York: Norton.
- Johnston, Darlene, 1995. “Native Rights as Collective Rights; A Question of Group Self-Preservation,” in Will Kymlicka (ed.), The Rights of Cultural Minorities, Oxford, UK: Oxford University Press.
- Karst, Kenneth, 1989. Belonging to America: Equal Citizenship and the Constitution, New Haven: Yale U.P.
- Katz, Jacob, 1961. Tradition and Crisis: Jewish Society at the End of the Middle Ages, New York: Schocken Books.
- Kernohan, Andrew, 1998. Liberalism, Equality, and Cultural Oppression, Cambridge: Cambridge U.P.
- Kittay, Eva F., 1999. Love's labor: Essays on Equality, Women, and Dependency, New York: Routledge.
- –––, 2000. “At Home with My Daughter,” in L.P. Francis and A. Silvers (eds.), Americans with Disabilities, New York: Routledge, pp. 64–80.
- –––, 2001. “When Caring is Just and Justice is Caring: Justice and Mental Retardation,” Public Culture, 13: 557–579.
- Koppelman, Andrew, 1997. “Three Arguments for Gay Rights,” Michigan Law Review, 95: 1636–67.
- –––, 1996. Antidiscrimination Law and Social Equality, New Haven: Yale University Press.
- –––, 1994. “Why Discrimination Against Lesbians and Gay Men is Sex Discrimination,” New York University Law Review, 69: 197–287.
- Kukathas, Chandran, 1995. “Are There Any Cultural Rights,” in W. Kymlicka (ed.), The Rights of Minority Cultures, New York: Oxford University Press, pp. 228–56.
- –––, 2001. “Is Feminism Bad for Multiculturalism?” Public Affairs Quarterly, 15: 83–97.
- Kymlicka, Will, 1995. Multicultural Citizenship, Oxford: Clarendon Press.
- –––, 1994. “Individual and Community Rights,” in J. Baker (ed.), Group Rights, University of Toronto Press.
- –––, 1989. Liberalism, Community, and Culture, Oxford: Clarendon Press.
- Lee, Patrick and Robert George, 1997. “What Sex Can Be: Self-Alienation, Illusion, or One-Flesh?” American Journal of Jurisprudence, 42: 135–57.
- Macedo, Stephen, 1996. “Sexual Morality and the New Natural Law,” in R.P. George (ed.), Natural Law, Liberalism, and Morality, Oxford: Oxford University Press.
- MacKinnon, Catherine, 1987. Feminism Unmodified, Cambridge, MA: Harvard University Press.
- Marx, Karl, 1843/1994.“On The Jewish Question,” in L.H. Simon (ed.), Karl Marx: Selected Writings, Indianapolis: Hackett, pp. 1–26.
- Manin, Bernard, The Principles of Representative Government, Cambridge: Cambridge University Press.
- Marshall, T.M., 1965. Class, Citizenship, and Social Development, Garden City, NY: Anchor.
- Mendes-Flohr, Paul and Jehuda Reinharz (eds.), 1995. The Jew in the Modern World, 2nd edition, New York: Oxford University Press.
- Nussbaum Martha C., 2006. Frontiers of Justice, Cambridge: Harvard University Press.
- Nussbaum, Martha and Amartya Sen (eds.), 1993. The Quality of Life, Oxford: Clarendon Press.
- Piper, Adrian M.S., 2001. “Two Kinds of Discrimination,” reprinted in B. Boxill, ed. Race and Racism, Oxford: Oxford University Press, pp. 193–237.
- Pogge, Thomas, 2000. “Justice for People with Disabilities,” in L.P. Francis and A. Silvers (eds.), Americans with Disabilities, New York: Routledge, pp. 34–53.
- Rawls, John, 2001. Justice as Fairness: A Restatement, Cambridge, MA: Harvard University Press.
- –––, 1999. A Theory of Justice, rev. ed., Cambridge, MA: Harvard University Press.
- –––, 1995. “Political Liberalism: Reply to Habermas,” Journal of Philosophy, 92: 132–80.
- –––, 1993. Political Liberalism, expanded ed., New York: Columbia University Press.
- Richarz, Monika, 1975. “Jewish Social Mobility in Germany during the Time of Emancipation,” Yearbook of the Leo Baeck Institute, 20: 69–77.
- Shattuck, Petra T. and Jill Norgren, 1993. Partial Justice, Providence, RI: Berg Publishers.
- Shelby, Tommie, 2006. We Who are Dark, Cambridge,MA: Harvard University Press.
- Silvers, Anita, 1998. “Formal Justice,” in A. Silvers, D. Wasserman and M. Mahowald (eds.), Disability, Difference, Discrimination, Lanham, MD: Rowman and Littlefield: 13–145.
- Silvers, Anita and Leslie Pickering Francis, 2005. “Justice Through Trust: Disability and the ‘Outlier Problem’ in Social Contract Theory,” Ethics, 116: 40–76.
- Stark, Cynthia A., 2007. “How to Include the Severely Disabled in a Contractarian Theory of Justice,” Journal of Political Philosophy, 15: 127–45.
- Sunstein, Cass, 2001. Designing Democracy, New York: Oxford University Press.
- Taylor, Charles, 1994. “The Politics of Recognition,” in A. Gutmann (ed.), Multiculturalism, Princeton: Princeton University Press.
- Taylor, Harriet, 1851/1984. “Enfranchisement of Women,” in J.M. Robson ed., Collected Works of John Stuart Mill, Vol XXI, Toronto: University of Toronto Press, pp. 393–415.
- Thernstrom, Abigail and Stephan Thernstrom, 1997. America in Black and White, New York: Simon and Schuster.
- Valencia-Weber, Gloria, 2004. “Santa Clara Pueblo v. Martinez: Twenty-five Years of Disparate Cultural Visions,” Kansas Journal of Law and Public Policy, 14: 49– 59.
- Valls, Andrew, 2010. “A Liberal Defense of Black Nationalism,” American Political Science Review, 104: 467–481.
- Waldron, Jeremy, 1995. “Minority Cultures and the Cosmopolitan Alternative,” in W. Kymlicka, ed. The Rights of Minority Cultures, New York: Oxford University Press, pp. 93–119.
- –––, 1993. Liberal Rights, Cambridge: Cambridge University Press
- Walzer, Michael, 1983. Spheres of Justice, New York: Basic Books.
- Wardle, Lynn, 1996. “A Critical Analysis of Constitutional Claims for Same-Sex Marriage,” Brigham Young Law Review, 1996: 1–96
- Wasserstrom, Richard, 2001. “Racism and Sexism,” reprinted in B. Boxill, ed., Race and Racism, Oxford: Oxford University Press, pp. 307–43.
- –––, 1976. “Racism, Sexism, and Preferential Treatment,” UCLA Law Review, 3: 581–622.
- Wellman, Carl, 1999. The Proliferation of Rights: Moral Progress or Empty Rhetoric? Boulder, CO: Westview.
- Wintermute, Robert, 1995. Sexual Orientation and Human Rights, New York: Oxford University Press.
- Wu, Frank, 2002. Yellow: Race in America Beyond Black and White, New York: Basic Books.
- Civil Rights Division, U.S. Department of Justice.
- The Civil Rights Project, UCLA.
- U.S. Commission on Civil Rights
- Cornell University Legal Information Institute: Civil Rights
- U.S. Equal Employment Opportunity Commission
- European Convention on Human Rights
- Human Rights Quarterly
- International Covenant on Civil and Political Rights
- International Covenant on Economic, Social and Cultural Rights
- International Human Rights Instruments
- National Constitutions
affirmative action | democracy | discrimination | feminist (interventions): philosophy of law | homosexuality | liberalism | representation, political | rights: group | rights: human | social minimum [basic income]
The editors would like to thank Jesse Gero for spotting several typographical errors and formatting errors of other kinds, and for taking the time to bring these to our attention. | http://plato.stanford.edu/entries/civil-rights/ | 13 |
19 | Hash Functions (Hash Algorithms)
A "Hash function" is a complex encryption algorithm used primarily in cryptography, and is like a shortened version of full-scale encryption.
Hash vs Encryption
Encryption is a broad term, while a hash algorithm is just one of the many encryption schemes.
Encryption - the process of converting information from its normal, comprehensible form into an obscured guise, unreadable without special knowledge.
Hash - a special form of encryption often used for passwords; a one-way algorithm that, when provided with a variable-length input (message), always produces the same fixed-length output, called a hash or message digest.
A collision occurs when two different messages result in exactly the same hash. Hash algorithms are written to avoid collisions, but for some, such as MD5, collisions have been shown to exist.
A Hash Example
Website User Registration and subsequent Login
a user goes to a website and clicks a button that says "New User Registration"
unknown to the user, his browser has downloaded the Hash algorithm as Java code which begins running in the computer memory
when he types in his user ID it is not encrypted - but when he then types in a password (a short message) - the Java hash routine encrypts it (into a longer "message digest", or hash) - so for this example, he types in his password "mypass", but before it is sent to the web server it is converted by the Java hash algorithm running on his machine into the hash: "5yfRRkrhJDbomacm2lsvEdg4GyY="
the web server receives and stores the hash (not the original message) in a database as "5yfRRkrhJDbomacm2lsvEdg4GyY=" - IMPORTANT: the web host never sees the actual password, but stores only the hash in its database!
the next time the user connects to the site - he types in his ID and password ("mypass"), which is converted by the Java routine to "5yfRRkrhJDbomacm2lsvEdg4GyY=". The server compares "5yfRRkrhJDbomacm2lsvEdg4GyY=" to the hash stored in its database - it matches and the user is granted access. Since the server only stores the longer, hashed value - it NEVER has to decrypt anything!
NOTE: if he typed a wrong password, such as "mypass1" - an entirely different hash would be created; it would not match the hash in the server's database, and he would be blocked.
IMPORTANT - how this protects the system from unauthorized users logging in: if an individual somehow intercepted the "password" as it was being sent to the server, or somehow got access to the server database, all they would have is the hash (5yfRRkrhJDbomacm2lsvEdg4GyY=), not the password (mypass). If they then connect to that website and are prompted for a login and password, they will only get access if they type "mypass" - but all they know is the hash, not the actual password. Even if they manage to view the Java code and see the exact algorithm that converted the password to the hash, it is very difficult, if not impossible, to reverse the process and find the password from the hash (see Example for full details).
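To make the flow concrete, here is a minimal, self-contained Java sketch of the compare-hashes-never-decrypt logic described above. It assumes an unsalted SHA-1 digest encoded with Base64 purely for illustration; the article does not specify the exact scheme behind its sample value, so the printed hashes may differ from "5yfRRkrhJDbomacm2lsvEdg4GyY=".

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Base64;

public class PasswordHashDemo {
    // Hash a password and Base64-encode the digest, mirroring the browser-side
    // routine described above (algorithm chosen here for illustration only).
    static String hash(String password) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-1");
        byte[] digest = md.digest(password.getBytes(StandardCharsets.UTF_8));
        return Base64.getEncoder().encodeToString(digest);
    }

    public static void main(String[] args) throws Exception {
        // Registration: the server stores only the hash, never the password.
        String storedHash = hash("mypass");

        // Later login: hash whatever the user typed and compare with the stored value.
        System.out.println(hash("mypass").equals(storedHash));  // true  -> access granted
        System.out.println(hash("mypass1").equals(storedHash)); // false -> access denied
    }
}
```

In practice, production systems also add a per-user salt and use a deliberately slow hash, but the basic idea of comparing hashes rather than decrypting anything is the same.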
Hash algorithms take a string (or message) of any length as input and produce a fixed-length string as output; not all such algorithms are suitable for use in cryptography. The output is sometimes termed a message digest or a digital fingerprint. The term "hash" is derived from the breakfast dish, since that is made of a jumble of minced-up pieces of food.
SHA (Secure Hash Algorithm)
NIST supports five hash algorithms called SHA, for generating a condensed representation of a message (message digest). The five algorithms are SHA-1, SHA-224, SHA-256, SHA-384, and SHA-512, and they are detailed in FIPS 180-2. When a message of any length < 2^64 bits (for SHA-1, SHA-224, and SHA-256) or < 2^128 bits (for SHA-384 and SHA-512) is input to an algorithm, the result is an output called a message digest. The message digests range in length from 160 to 512 bits, depending on the algorithm.
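As a rough illustration of these fixed digest sizes, the following Java sketch uses the standard java.security.MessageDigest API to print the output length of several SHA variants; the input text is arbitrary, since the digest length depends only on the algorithm chosen.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class DigestLengths {
    public static void main(String[] args) throws Exception {
        byte[] message = "any input at all".getBytes(StandardCharsets.UTF_8);
        for (String algorithm : new String[] {"SHA-1", "SHA-256", "SHA-384", "SHA-512"}) {
            MessageDigest md = MessageDigest.getInstance(algorithm);
            byte[] digest = md.digest(message);
            // The digest length is fixed by the algorithm, not by the input size.
            System.out.println(algorithm + " digest: " + (digest.length * 8) + " bits");
        }
    }
}
```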
MD5 (Message-Digest algorithm 5)
See also: RFC 1321 (the MD5 specification).
MD5 is a widely used message digest algorithm (i.e., a cryptographic hash function) with a 128-bit hash value. It is not merely a checksum generator, though the term is sometimes imprecisely used for it. It is one of a series of message digest algorithms designed by Professor Ronald Rivest of MIT. MD5 was designed in 1991 in response to analytic work indicating that its predecessor, MD4, was likely to be insecure; this indication was subsequently confirmed when weaknesses were found in MD4 in 1994 (Dobbertin, 1998).
MD5 has been widely used and was originally thought to be cryptographically secure. However, work in Europe in 1994 uncovered weaknesses which make further use of MD5 questionable. Specifically, it has been shown that it is computationally feasible to generate a collision, that is, two different messages with the same hash. Unlike with MD4, it is still thought to be very difficult to produce a message matching a given hash (a preimage). In 2004, a distributed project named MD5CRK was initiated to demonstrate that MD5 is insecure by finding a collision. Because of these concerns, many security researchers and practitioners recommend that SHA-1 (or another high-quality cryptographic hash function) be used instead of MD5.
MD5 hashes (or message digests) are commonly represented as a 32-digit hexadecimal number, using the characters 0-9 and a-f. For example, the MD5 hash of the string "The quick brown fox jumps over the lazy dog" is:
9e107d9d372bb6826bd81d3542a419d6
The MD5 hash (sometimes called md5sum, for MD5 checksum) of a zero-length string is:
d41d8cd98f00b204e9800998ecf8427e
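The zero-length-string value can be reproduced with a short Java sketch that hex-encodes an MD5 digest; the helper name md5Hex is just an illustrative choice.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class Md5HexDemo {
    // Hex-encode an MD5 digest: 16 bytes become 32 hexadecimal characters.
    static String md5Hex(String input) throws Exception {
        MessageDigest md = MessageDigest.getInstance("MD5");
        byte[] digest = md.digest(input.getBytes(StandardCharsets.UTF_8));
        StringBuilder hex = new StringBuilder();
        for (byte b : digest) {
            hex.append(String.format("%02x", b)); // two hex digits per byte
        }
        return hex.toString();
    }

    public static void main(String[] args) throws Exception {
        // Should print d41d8cd98f00b204e9800998ecf8427e, the MD5 of the empty string.
        System.out.println(md5Hex(""));
    }
}
```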
RIPEMD-160 (RACE Integrity Primitives Evaluation Message Digest)
RIPEMD-160 is a 160-bit message digest algorithm (and cryptographic hash function) developed in Europe by Hans Dobbertin, Antoon Bosselaers and Bart Preneel, and first published in 1996. It is an improved version of RIPEMD, which in turn was based upon the design principles used in MD4, and it is similar in both strength and performance to the more popular SHA-1.
There also exist 128, 256 and 320-bit versions of this algorithm, called RIPEMD-128, RIPEMD-256, and RIPEMD-320, respectively. The 128-bit version was intended only as a drop-in replacement for the original RIPEMD, which was also 128-bit, and which had been found to have questionable security. The 256 and 320-bit versions diminish only the chance of accidental collision, and don't have higher levels of security as compared to, respectively, RIPEMD-128 and RIPEMD-160.
RIPEMD-160 was designed in the open academic community, in contrast to the NSA-designed algorithm, SHA-1. On the other hand, RIPEMD-160 is a less popular and correspondingly less well-studied design.
Previous Example detailed - one-way Hash Encryption of a Password
This scenario is a perfect candidate for "one-way hash encryption", also known as a message digest, digital signature, one-way encryption, digital fingerprint, or cryptographic hash. It is referred to as "one-way" because although you can calculate a message digest, given some data, you can't figure out what data produced a given message digest. This is also intended to be a collision-free mechanism: no two different values should produce the same digest. Another property of this digest is that it is a condensed representation of a message or a data file, and as such it has a fixed length.
There are several message-digest algorithms used widely today.
Algorithm    Strength
MD5          128 bit
SHA-1        160 bit
SHA-1 (Secure Hash Algorithm 1) is slower than MD5, but the message digest is larger, which makes it more resistant to brute force attacks. It is therefore recommended that the Secure Hash Algorithm be preferred to MD5 for all of your digest needs. Note that SHA-1 now has even higher-strength siblings (SHA-256, SHA-384, and SHA-512) for 256-, 384- and 512-bit digests respectively.
Typical Registration Scenario
Here is a typical flow of how our message digest algorithm can be used to provide one-way password hashing:
1) User registers with some site by submitting the following data: username "jsmith" and password "mypass".
2) Before storing the data, a one-way hash of the password is created: "mypass" is transformed into "5yfRRkrhJDbomacm2lsvEdg4GyY=".
The data stored in the database ends up looking like this:
username    password
jsmith      5yfRRkrhJDbomacm2lsvEdg4GyY=
3) When jsmith comes back to this site later and decides to login using his credentials (jsmith/mypass), the password hash is created in memory (session) and is compared to the one stored in the database. Both values are equal to "5yfRRkrhJDbomacm2lsvEdg4GyY=" since the same password value "mypass" was used both times when submitting his credentials. Therefore, his login will be successful.
Note, any other plaintext password value will produce a different sequence of characters. Even using a similar password value ("mypast") with only one-letter difference, results in an entirely different hash: "hXdvNSKB5Ifd6fauhUAQZ4jA7o8=" .
plaintext password    encrypted password
mypass                5yfRRkrhJDbomacm2lsvEdg4GyY=
mypast                hXdvNSKB5Ifd6fauhUAQZ4jA7o8=
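A small sketch, again assuming the same illustrative unsalted SHA-1 + Base64 scheme as before (the article does not name the exact algorithm), shows how a one-character change in the password yields a completely different digest:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Base64;

public class AvalancheDemo {
    public static void main(String[] args) throws Exception {
        for (String password : new String[] {"mypass", "mypast"}) {
            MessageDigest md = MessageDigest.getInstance("SHA-1");
            byte[] digest = md.digest(password.getBytes(StandardCharsets.UTF_8));
            // A single-character change flips roughly half of the output bits,
            // so the two printed digests share no visible pattern.
            System.out.println(password + " -> " + Base64.getEncoder().encodeToString(digest));
        }
    }
}
```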
As mentioned above, given that a strong hash algorithm such as SHA is used, it is computationally infeasible to reverse-engineer "5yfRRkrhJDbomacm2lsvEdg4GyY=" back into "mypass".
Therefore, even if a malicious hacker gets hold of your password digest, he/she won't be able to determine what your password is. | http://www.infocellar.com/networks/Security/hash.htm | 13
46 | In graduate school you will need to read critically most of the time so it's important that you understand how to approach a text with a critical eye. Critical reading involves evaluating and judging the accuracy of statements and the soundness of the reasoning that leads to conclusions.
Critical reading raises many questions such as:
What to consider when reading critically?
Authors rarely explicitly state all that they wish to communicate especially when they assume that their "audience" has certain background knowledge, attitudes, and values. Therefore, it is the reader's job to be aware of the implicit messages.
What is an "argument"?
People present arguments to persuade others to accept claims. An argument has three parts:
1. Claim - the statement that the arguer wants others to accept as true.
2. Premise - reasons/evidence to support a claim. Arguments can have 1 or more premises.
3. Conclusion - the claim being defended by the reasons or evidence. (Do not confuse this with the other usage of ‘conclusion' to mean the last part of an essay or presentation).
Therefore, an argument occurs when...
a CLAIM is made and PREMISES are put forward to justify a CONCLUSION as true.
The arrangement for an argument is often (but not always):
Premise 1 + Premise 2 + Premise 3 etc. → THEREFORE + Conclusion
What is the difference between an argument and an explanation?
Explanation = claims are offered to make another claim understandable, i.e., to say why or how it is true.
An indicator word indicates the presence of an argument and helps us determine what role the statement plays in the argument, i.e., either premise or conclusion. Some indicator words come before the premise; others come before the conclusion. Indicator words are NOT part of the content, but serve to signal which statements are premises and which are conclusions. They indicate the direction of the reasoning in the argument. Learning these words and their meanings will help you spot an argument more quickly.
Arguments must be valid which means the conclusion follows logically from the reasons given. Depending on the writer's goal, differing degrees of validity are used to persuade the reader to support his/her argument. A critical reader needs to be aware to what extent the author is providing support for his/her argument.
Degrees of support/validity:
Nil. Even if all the given reasons are true, they would provide no justification whatsoever for the conclusion. (aka a faulty conclusion or non sequitur)
Weak. If the given reasons are true, they would provide a small amount of support for the conclusion, but certainly not enough to justify accepting the conclusion as true. In other words, the reasons are logical, but NOT compelling enough to make it ‘a good bet'.
Moderate. Between strong and weak. If the reasons are true, they do not establish the truth of the conclusion, but they make the truth of the conclusion a ‘live possibility' worth further consideration and investigation.
Strong. If the reasons are true, then they make the truth of the conclusion extremely likely, but not totally guaranteed. In other words, you would stake something of great value on the truth of the conclusion.
Deductively valid. If the reasons are true, then there is no possible way in which the conclusion can be false.
Source: Allen, M. (1997). Smart thinking: skills for critical understanding and writing. Melbourne: Oxford University Press.
One important aspect of critical reading is our ability to evaluate arguments, i.e., to judge and assess an argument's persuasiveness. If you are persuaded by an argument, you will accept it based on the strengths of the reasons provided.
Arguing a conclusion based on premises is a natural human activity.
In a good argument the ‘arguer' puts forward 3 assertions:
Someone who offers a ‘good' argument is giving you REASONS and EVIDENCE to accept their claim. Therefore, if you look only at the conclusion and accept or reject it without looking at the reasons (premises), you are ignoring the argument.
Adapted: Govier, T. (1992). A practical study of argument. 3rd ed. Belmont, CA: Wadsworth Publishing Co.
i. Terms are clearly defined.
Writers and readers need to agree on what is meant by the key terms. Without agreement on terms, the argument's validity can by questioned.
ii. Information is used fairly.
The information used to support the argument is correct and current. It avoids distorting the facts or being one-sided, i.e., both sides of the argument are represented.
iii. The argument is logical.
Arguments can be biased but NOT fallacious. To determine if an argument is logical: 1) consider the "grounds" on which it was based, i.e., personal knowledge, reliable expert opinion, common knowledge, reliable testimony, common sense; and 2) look closely at the claims to make sure they are not fallacious.
Source: Behrens L. & Rosen L. (2005). Writing and Reading Across the Curriculum. NY: Pearson/Longman.
A logical fallacy is faulty logic used in writing or speaking. There are many types of fallacies. You need to be able to recognize them when you read and avoid using them in your writing.
Use the following checklist of guided questions to assist you in reading more critically: | http://queensu.ca/learningstrategies/grad/reading/module/criticalreading.html | 13 |
18 | Economic inequality
Economic inequality refers to disparities in the distribution of economic assets and income. The term typically refers to inequality among individuals and groups within a society, but can also refer to inequality among nations. Economic inequality generally concerns equality of outcome, and is related to the idea of equality of opportunity. It is a contested issue whether economic inequality is a positive or negative phenomenon, both on utilitarian and moral grounds.
Economic inequality has existed in a wide range of societies and historical periods; its nature, cause and importance are open to broad debate. A country's economic structure or system (for example, capitalism or socialism), ongoing or past wars, and differences in individuals' abilities to create wealth are all involved in the creation of economic inequality.
There are various numerical indexes for measuring economic inequality. Inequality is most often measured using the Gini coefficient, but there are also many other methods. One approach measures inequality in monetary terms: for instance, a person may be regarded as poor if their income falls below the poverty line.
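As an illustration of how such an index can be computed, here is a minimal Java sketch of one textbook formula for the Gini coefficient (the mean absolute difference between incomes divided by twice the mean income); the sample incomes are invented, and real statistical software uses more careful estimators.

```java
public class GiniDemo {
    // Gini coefficient via the pairwise formula:
    //   G = sum_i sum_j |x_i - x_j| / (2 * n^2 * mean)
    static double gini(double[] incomes) {
        int n = incomes.length;
        double total = 0.0;
        for (double x : incomes) total += x;          // equals n * mean
        double diffSum = 0.0;
        for (double a : incomes)
            for (double b : incomes)
                diffSum += Math.abs(a - b);
        return diffSum / (2.0 * n * total);
    }

    public static void main(String[] args) {
        double[] perfectEquality = {100, 100, 100, 100};
        double[] highlyUnequal   = {0, 0, 0, 400};
        System.out.println(gini(perfectEquality)); // 0.0
        System.out.println(gini(highlyUnequal));   // 0.75 (tends toward 1 as n grows)
    }
}
```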
Economic inequality among different individuals or social groups is best measured within a single country. This is because country-specific factors tend to obscure inter-country comparisons of individuals' incomes. A single nation will have more or less inequality depending on the social and economic structure of that country.
Causes of inequality
There are many reasons for economic inequality within societies. These causes are often inter-related, non-linear, and complex. Acknowledged factors that impact economic inequality include the labour market, innate ability, education, race, gender, culture, preference for earning income or enjoying leisure, willingness to take risks, wealth condensation, and development patterns.
The labor market
A major cause of economic inequality within modern market economies is the determination of wages by the market, provided that this market is a free market ruled only by the law of supply and demand. In this view, inequality is caused by the differences in the supply and demand for different types of work.
A job where there are many willing workers (high supply) but only a small number of positions (low demand) will result in a low wage for that job. This is because competition between workers drives down the wage. An example of this would be low-skill jobs such as dish-washing or customer service. Because unemployment persists in market economies and these jobs require very little skill, there is a very high supply of willing workers. Competition amongst workers tends to drive down the wage, since if any one worker demands a higher wage the employer can simply hire another employee at an equally low wage.
A job where there are few willing workers (low supply) but a large demand for the skills these workers have will result in high wages for that job. This is because competition between employers will drive up the wage. An example of this would be high-skill jobs such as engineers, professional athletes, or capable CEOs. Competition amongst employers tends to drive up wages, since if any one employer offers a low wage, the worker can simply quit and easily find a new job at a higher wage.
While the above examples tend to identify skill with high demand and wages, this is not necessarily the case. For example, highly skilled computer programmers in western countries have seen their wages suppressed by competition from computer programmers in Developing Countries who are willing to accept a lower wage.
The final result of these supply and demand interactions is a gradation of different wages representing income inequality within society.
Many people believe that there is a correlation between differences in innate ability, such as intelligence, strength, or charisma, and an individual's wealth. Relating these innate abilities back to the labor market suggests that such abilities are in high demand relative to their supply and hence play a large role in increasing the wage of those who have them. Contrariwise, such innate abilities might also affect an individual's ability to operate within society in general, regardless of the labor market.
Various studies have been conducted on the correlation between IQ scores and wealth/income. The book titled "IQ and the Wealth of Nations", written by Dr. Richard Lynn, examines this relationship with limited success; other peer-reviewed research papers have also been criticised harshly. Without further research on the topic, incorporating statistical models that are universally accepted, it is fairly difficult to reach an objective conclusion regarding any relationship between intelligence and wealth or income.
One important factor in the creation of inequality is variation in individuals' access to education. Education, especially in an area where there is a high demand for workers, creates high wages for those with this education. As a result, those who are unable to afford an education, or choose not to pursue optional education, generally receive much lower wages. Many economists believe that a major reason the world has experienced increasing levels of inequality since the 1980s is an increase in the demand for highly skilled workers in high-tech industries. They believe that this has resulted in an increase in wages for those with an education, but has not increased the wages of those without an education, leading to greater inequality.
Gender, race, and culture
The existence of different genders, races and cultures within a society is also thought to contribute to economic inequality. Some psychologists such as Richard Lynn argue that there are innate group differences in ability that are partially responsible for producing race and gender group differences in wealth (see also race and intelligence, sex and intelligence) though this assertion is highly controversial.
The idea of the gender gap tries to explain differences in income between genders. Culture and religion are thought to play a role in creating inequality by either encouraging or discouraging wealth-acquiring behaviour, and by providing a basis for discrimination. In many countries individuals belonging to certain racial and ethnic minorities are more likely to be poor. Proposed causes include cultural differences amongst different races, an educational achievement gap, and racism.
Simon Kuznets argued that levels of economic inequality are in large part the result of stages of development. Kuznets saw a curve-like relationship between level of income and inequality, now known as the Kuznets curve. According to Kuznets, countries with low levels of development have relatively equal distributions of wealth. As a country develops, it acquires more capital, which leads to the owners of this capital having more wealth and income and introducing inequality. Eventually, through various possible redistribution mechanisms such as social welfare programs, more developed countries move back to lower levels of inequality. Kuznets demonstrated this relationship using cross-sectional data. However, more recent testing of this theory with superior panel data has shown it to be very weak.
Wealth condensation is a theoretical process by which, under certain conditions, newly-created wealth concentrates in the possession of already-wealthy individuals or entities. According to this theory, those who already hold wealth have the means to invest in new sources of creating wealth or to otherwise leverage the accumulation of wealth, thus are the beneficiaries of the new wealth. Over time, wealth condensation can significantly contribute to the persistence of inequality within society.
As an example of wealth condensation, truck drivers who own their own trucks often make more money than those who do not, since the owner of a truck can escape the rent charged to drivers by owners (even taking into account maintenance and other costs). Hence, a truck driver who has wealth to begin with can afford to buy his own truck in order to make more money. A truck driver who does not own his own truck makes a lesser wage and is therefore stuck in a Catch-22, unable to buy his own truck to increase his income.
As another example of wealth condensation, savings from the upper-income groups tend to accumulate much faster than savings from the lower-income groups. Upper-income groups can save a significant portion of their incomes. On the other hand, lower-income groups barely make enough to cover their consumption, and hence are capable of saving only a fraction of their incomes, or even none at all. Assuming both groups earn the same yield rate on their savings, the return on the upper-income groups' savings is much greater than the return on the lower-income groups' savings, because the upper-income groups have a much larger base.
Related to wealth condensation are the effects of intergenerational inequality. The rich tend to provide their offspring with a better education, increasing their chances of achieving a high income. Furthermore, the wealthy often leave their offspring with a hefty inheritance, jump-starting the process of wealth condensation for the next generation. However, it has been contended by some sociologists such as Charles Murray that this has little effect on one's long-term outcome and that innate ability is by far the best determinant of one's lifetime outcome.
Trade liberalisation may shift economic inequality from a global to a domestic scale. When rich countries trade with poor countries, the low-skilled workers in the rich countries may see reduced wages as a result of the competition. Trade economist Paul Krugman estimates that trade liberalisation has had a measurable effect on the rising inequality in the United States. He attributes this trend to increased trade with poor countries and the fragmentation of the means of production, resulting in low skilled jobs becoming more tradeable. However, he concedes that the effect of trade on inequality in America is minor when compared to other causes, such as technological innovation, a view shared by other experts. Lawrence Katz, a Harvard economist, estimates that trade has only accounted for 5-15% of rising income inequality. Some economists, such as Robert Lawrence, dispute any such relationship. In particular, Robert Lawrence argues that technological innovation and automation has meant that low-skilled jobs have been replaced by machines in rich countries, and that rich countries no longer have significant numbers of low skilled manufacturing workers that could be affected by competition from poor countries.
There are many factors that tend to constrain the amount of economic inequality within society. These factors may be divided into two general classes: government sponsored, and market driven. The relative merits and effectiveness of each approach is a subject of heated debate.
Proponents of government sponsored approaches to reducing economic inequality generally believe that economic inequality represents a fundamental injustice, and that it is the right and duty of the government to correct this injustice. Government sponsored approaches to reducing economic inequality include:
- Mass education - to increase the supply of skilled labor and decrease the wage of skilled labour to reduce income inequality;
- Progressive taxation, where the rich are taxed more than the poor - to reduce the amount of income inequality in society.
- Minimum wage legislation - to raise the income of the poorest working group. However, proponents of the free market point out this will cut the least skilled out of the employment market entirely.
- The Nationalization or subsidization of "essential" goods and services such as food, healthcare, education, and housing - to reduce the amount of inequality in society - by providing goods and services that everyone needs cheaply or freely, governments can effectively increase the disposable income of the poorer members of society.
Proponents of free markets point out that these measures usually backfire, as the growth of government would create a privileged class such as the nomenklatura in the Soviet Union who use their position within the government to gain unequal access to resources, thereby reducing economic equality. Others argue that free markets without these measures allow the already privileged to control the political life of a country as it did in Brazil where the country's right wing military dictatorship (1964-1985) allowed the country to become the most economically unequal in South America.
Other proponents of free markets do not generally see economic inequality in a free market as fundamentally unjust. Market-driven reductions in economic inequality are therefore incidental to economic freedom. Nevertheless there are some market forces which work to reduce economic inequality:
- In a market-driven economy, too much economic disparity could generate pressure for its own removal. In an extreme example, if one person owned everything, that person would immediately (in a market economy) have to hire people to maintain his property, and that person's wealth would immediately begin to dissipate. (García-Peñalosa 2006)
- By a concept known as the "decreasing marginal utility of wealth," a wealthy person will tend not to value his last dollar as much as a poor person, since a poor person's dollars are more likely to be spent for essentials. This could tend to move wealth from the rich to the poor. A derogatory term for this is the "trickle down effect."
Effects of inequality
Research has shown a clear link between income inequality and social cohesion. In more equal societies, people are much more likely to trust each other, measures of social capital suggest greater community involvement, and homicide rates are consistently lower.
One of the earliest writers to note the link between economic equality and social cohesion was Alexis de Tocqueville in his Democracy in America. Writing in 1831:
- Among the new objects that attracted my attention during my stay in the United States, none struck me with greater force than the equality of conditions. I easily perceived the enormous influence that this primary fact exercises on the workings of society. It gives a particular direction to the public mind, a particular turn to the laws, new maxims to those who govern, and particular habits to the governed... It creates opinions, gives rise to sentiments, inspires customs, and modifies everything it does not produce... I kept finding that fact before me again and again as a central point to which all of my observations were leading.
In a 2002 paper, Eric Uslaner and Mitchell Brown showed that there is a high correlation between the amount of trust in society and the amount of income equality. They did this by comparing results from the question "would others take advantage of you if they got the chance?" in U.S General Social Survey and others with statistics on income inequality.
Robert Putnam, professor of political science at Harvard, established links between social capital and economic inequality. His most important studies (Putnam, Leonardi, and Nanetti 1993, Putnam 2000) established these links in both the United States and in Italy. On the relationship of inequality and involvement in community he says:
- Community and equality are mutually reinforcing… Social capital and economic inequality moved in tandem through most of the twentieth century. In terms of the distribution of wealth and income, America in the 1950s and 1960s was more egalitarian than it had been in more than a century… [T]hose same decades were also the high point of social connectedness and civic engagement. Record highs in equality and social capital coincided. Conversely, the last third of the twentieth century was a time of growing inequality and eroding social capital… The timing of the two trends is striking: somewhere around 1965-70 America reversed course and started becoming both less just economically and less well connected socially and politically. (Putnam 2000 pp 359)
In addition to affecting levels of trust and civic engagement, inequality in society has also been shown to be highly correlated with crime rates. Most studies looking into the relationship between crime and inequality have concentrated on homicides - since homicides are almost identically defined across all nations and jurisdictions. There have been over fifty studies showing tendencies for violence to be more common in societies where income differences are larger. Research has been conducted comparing developed countries with undeveloped countries, as well as studying areas within countries. Daly et al. (2001) found that among U.S. states and Canadian provinces there is a tenfold difference in homicide rates related to inequality. They estimated that about half of all variation in homicide rates can be accounted for by differences in the amount of inequality in each province or state. Fajnzylber et al. (2002) found a similar relationship worldwide. Among comments in academic literature on the relationship between homicides and inequality are:
- The most consistent finding in cross-national research on homicides has been that of a positive association between income inequality and homicides. (Neapolitan 1999 pp 260)
- Economic inequality is positively and significantly related to rates of homicide despite an extensive list of conceptually relevant controls. The fact that this relationship is found with the most recent data and using a different measure of economic inequality from previous research, suggests that the finding is very robust. (Lee and Bankston 1999 pp 50)
Recently, there has been increasing interest from epidemiologists on the subject of economic inequality and its relation to the health of populations. There is a very robust correlation between socioeconomic status and health. This correlation suggests that it is not only the poor who tend to be sick when everyone else is healthy, but that there is a continual gradient, from the top to the bottom of the socio-economic ladder, relating status to health. This phenomenon is often called the "SES Gradient". Lower socioeconomic status has been linked to chronic stress, heart disease, ulcers, type 2 diabetes, rheumatoid arthritis, certain types of cancer, and premature aging.
There is debate regarding the cause of the SES Gradient. A number of researchers (A. Leigh, C. Jencks, A. Clarkwest - see also Russell Sage working papers) see a definite link between economic status and mortality due to the greater economic resources of the wealthy, but they find little correlation due to social status differences.
Other researchers such as Richard Wilkinson, J. Lynch, and G.A. Kaplan have found that socioeconomic status strongly affects health even when controlling for economic resources and access to health care. Most famous for linking social status with health are the Whitehall studies - a series of studies conducted on civil servants in London. The studies found that although all civil servants in England have the same access to health care, there was a strong correlation between social status and health. The studies found that this relationship remained strong even when controlling for health-affecting habits such as exercise, smoking and drinking. Furthermore, it has been noted that no amount of medical attention will help decrease the likelihood of someone getting type 2 diabetes or rheumatoid arthritis - yet both are more common among populations with lower socioeconomic status. Lastly, it has been found that amongst the wealthiest quarter of countries on earth (a set stretching from Luxembourg to Slovakia) there is no relation between a country's wealth and general population health - suggesting that past a certain level, absolute levels of wealth have little impact on population health, but relative levels within a country do.
The concept of psychosocial stress attempts to explain how psychosocial phenomena such as status and social stratification can lead to the many diseases associated with the SES Gradient. Higher levels of economic inequality tend to intensify social hierarchies and generally degrade the quality of social relations - leading to greater levels of stress and stress-related diseases. Richard Wilkinson found this to be true not only for the poorest members of society, but also for the wealthiest. Economic inequality is bad for everyone's health.
The effects of inequality on health are not limited to human populations. David H. Abbott at the Wisconsin National Primate Research Centre found that among many primate species, less egalitarian social structures correlated with higher levels of stress hormones among socially subordinate individuals.
Utility, economic welfare, and distributive efficiency
Economic inequality is thought to reduce distributive efficiency within society. That is to say, inequality reduces the sum total of personal utility because of the decreasing marginal utility of wealth. For example, a house may provide less utility to a single millionaire as a summer home than it would to a homeless family of five. The marginal utility of wealth is lowest among the richest. In other words, an additional dollar spent by a poor person will go to things providing a great deal of utility to that person, such as basic necessities like food, water, and healthcare; meanwhile, an additional dollar spent by a much richer person will most likely go to things providing relatively less utility to that person, such as luxury items. From this standpoint, for any given amount of wealth in society, a society with more equality will have higher aggregate utility. Some studies (Layard 2003; Blanchard and Oswald 2000, 2003) have found evidence for this theory, noting that in societies where inequality is lower, population-wide satisfaction and happiness tend to be higher.
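A worked illustration of the diminishing-marginal-utility argument (the logarithmic utility function and the dollar amounts below are assumptions chosen for exposition, not figures from the studies cited above):

```latex
% Assumed for illustration: logarithmic utility u(w) = ln(w), and a transfer of
% $1,000 from a person holding $100,000 to a person holding $10,000.
\[
\Delta U_{\text{rich}} = \ln(99\,000) - \ln(100\,000) \approx -0.010, \qquad
\Delta U_{\text{poor}} = \ln(11\,000) - \ln(10\,000) \approx +0.095
\]
\[
\Delta U_{\text{total}} \approx +0.085 > 0
\]
% Total wealth is unchanged, yet the sum of utilities rises, which is the
% "decreasing marginal utility of wealth" argument stated above.
```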
Economist Arthur Cecil Pigou discussed the impact of inequality in The Economics of Welfare. He wrote:
Nevertheless, it is evident that any transference of income from a relatively rich man to a relatively poor man of similar temperament, since it enables more intense wants, to be satisfied at the expense of less intense wants, must increase the aggregate sum of satisfaction. The old "law of diminishing utility" thus leads securely to the proposition: Any cause which increases the absolute share of real income in the hands of the poor, provided that it does not lead to a contraction in the size of the national dividend from any point of view, will, in general, increase economic welfare.
In addition to the argument based on diminishing marginal utility, Pigou makes a second argument that income generally benefits the rich by making them wealthier than other people, whereas the poor benefit in absolute terms. Pigou writes:
Now the part played by comparative, as distinguished from absolute, income is likely to be small for incomes that only suffice to provide the necessaries and primary comforts of life, but to be large with large incomes. In other words, a larger proportion of the satisfaction yielded by the incomes of rich people comes from their relative, rather than from their absolute, amount. This part of it will not be destroyed if the incomes of all rich people are diminished together. The loss of economic welfare suffered by the rich when command over resources is transferred from them to the poor will, therefore, be substantially smaller relatively to the gain of economic welfare to the poor than a consideration of the law of diminishing utility taken by itself suggests. -- Arthur Cecil Pigou in The Economics of Welfare
Schmidtz (2006) argues that maximizing the sum of individual utilities does not necessarily imply that the maximum social utility is achieved. For example:
A society that takes Joe Rich’s second unit [of corn] is taking that unit away from someone who . . . has nothing better to do than plant it and giving it to someone who . . . does have something better to do with it. That sounds good, but in the process, the society takes seed corn out of production and diverts it to food, thereby cannibalizing itself
Many people accept inequality as a given, and argue that the prospect of greater material wealth provides incentives for competition and innovation within an economy.
Some modern economic theories, such as the neoclassical school, have suggested that a functioning economy requires a certain level of unemployment. These theories argue that unemployment benefits must be below the wage level to provide an incentive to work, thereby mandating inequality. Other schools of thought, including socialism and Keynesianism, dispute this positive role of unemployment.
Many economists believe that one of the main reasons that inequality might induce economic incentive is because material wellbeing and conspicuous consumption are related to status. In this view, high stratification of income (high inequality) creates high amounts of social stratification, leading to greater competition for status. One of the first writers to note this relationship was Adam Smith who recognized "regard" as one of the major driving forces behind economic activity. From The Theory of Moral Sentiments in 1759:
- [W]hat is the end of avarice and ambition, of the pursuit of wealth, of power, and pre-eminence? Is it to supply the necessities of nature? The wages of the meanest labourer can supply them... [W]hy should those who have been educated in the higher ranks of life, regard it as worse than death, to be reduced to live, even without labour, upon the same simple fare with him, to dwell under the same lowly roof, and to be clothed in the same humble attire? From whence, then, arises that emulation which runs through all the different ranks of men, and what are the advantages which we propose by that great purpose of human life which we call bettering our condition? To be observed, to be attended to, to be taken notice of with sympathy, complacency, and approbation, are all the advantages which we can propose to derive from it. It is the vanity, not the ease, or the pleasure, which interests us ( Theory of Moral Sentiments, Part I, Section III, Chapter II).
Modern sociologists and economists such as Juliet Schor and Robert H. Frank have studied the extent to which economic activity is fueled by the ability of consumption to represent social status. Schor, in The Overspent American, argues that the increasing inequality during the 1980s and 1990s strongly accounts for increasing aspirations of income, increased consumption, decreased savings, and increased debt. In Luxury Fever Robert H. Frank argues that people's satisfaction with their income is much more strongly affected by how it compares with others than its absolute level.
Several recent economists have investigated the relationship between inequality and economic growth using econometrics.
In their study for the World Institute for Development Economics Research, Giovanni Andrea Cornia and Julius Court (2001) reach policy conclusions as to the optimal distribution of income. They conclude that too much equality (below a Gini coefficient of .25) negatively impacts growth due to "incentive traps, free-riding, labour shirking, [and] high supervision costs". They also claim that high levels of inequality (above a Gini coefficient of .40) negatively impact growth, due to "incentive traps, erosion of social cohesion, social conflicts, [and] uncertain property rights". They advocate for policies which put equality at the low end of this "efficient" range.
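The Gini coefficient itself is straightforward to compute. The sketch below is only a minimal Python illustration, not the estimator used by Cornia and Court, and the income figures are invented for the example.

```python
def gini(incomes):
    """Gini coefficient of a list of non-negative incomes.

    Uses the standard closed form for sorted data:
    G = (2 * sum_i i * x_i) / (n * sum_i x_i) - (n + 1) / n, with i = 1..n.
    """
    xs = sorted(incomes)
    n = len(xs)
    total = sum(xs)
    if n == 0 or total == 0:
        return 0.0
    weighted = sum(i * x for i, x in enumerate(xs, start=1))
    return 2.0 * weighted / (n * total) - (n + 1.0) / n

# Hypothetical income distributions (arbitrary units).
fairly_equal = [9, 10, 10, 11, 12]
very_unequal = [1, 2, 3, 4, 90]
print(round(gini(fairly_equal), 2))   # about 0.05, well below 0.25
print(round(gini(very_unequal), 2))   # about 0.72, well above 0.40
```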
Robert Barro wrote a paper arguing that inequality reduces growth in poor countries and promotes growth in rich ones. A number of other researchers have derived conflicting results, some concluding there is a negative effect of inequality on growth and others a positive. Patrizio Pagano used Granger causality, a technique that can detect two-way interactions between two variables, to attempt to explain these previous findings. Pagano's research suggested that inequality had a negative effect on growth while growth increased inequality. The two-way interaction largely explains the contradiction in past research.
Perspectives regarding economic inequality
There are various schools of thought regarding economic inequality.
Marxism favors an eventual society where distribution is based on an individual's needs rather than his ability to produce, social class, inheritance, or other such factors. In such a system inequality would be low or non-existent assuming everyone had the same "needs".
Meritocracy favors an eventual society where an individual's success is a direct function of his merit, or contribution. Therefore, economic inequality is beneficial inasmuch as it reflects individual skills and effort, and detrimental inasmuch as it represents inherited or unjustified wealth or opportunities. From a meritocratic point of view, measuring economic equality as a single parameter that does not distinguish these two opposite contributing factors serves no good purpose.
Classical liberals and libertarians generally do not take a stance on wealth inequality, but believe in equality under the law regardless of whether it leads to unequal wealth distribution. Ludwig von Mises (1996) explains:
The liberal champions of equality under the law were fully aware of the fact that men are born unequal and that it is precisely their inequality that generates social cooperation and civilization. Equality under the law was in their opinion not designed to correct the inexorable facts of the universe and to make natural inequality disappear. It was, on the contrary, the device to secure for the whole of mankind the maximum of benefits it can derive from it. Henceforth no man-made institutions should prevent a man from attaining that station in which he can best serve his fellow citizens.
Libertarian Robert Nozick argued that government redistributes wealth by force (usually in the form of taxation), and that the ideal moral society would be one where all individuals are free from force. However, Nozick recognized that some modern economic inequalities were the result of forceful taking of property, and a certain amount of redistribution would be justified to compensate for this force but not because of the inequalities themselves. John Rawls argued in A Theory of Justice that inequalities in the distribution of wealth are only justified when they improve society as a whole, including the poorest members. Rawls does not discuss the full implications of his theory of justice. Some see Rawls's argument as a justification for capitalism since even the poorest members of society theoretically benefit from increased innovations under capitalism; others believe only a strong welfare state can satisfy Rawls's theory of justice.
Classical liberal Milton Friedman believed that if government action is taken in pursuit of economic equality that political freedom would suffer. In a famous quote, he said:
- A society that puts equality before freedom will get neither. A society that puts freedom before equality will get a high degree of both.
Arguments based on social justice
Patrick Diamond and Anthony Giddens (professors of Economics and Sociology, respectively) hold that
pure meritocracy is incoherent because, without redistribution, one generation's successful individuals would become the next generation's embedded caste, hoarding the wealth they had accumulated.
They also state that social justice requires redistribution of high incomes and large concentrations of wealth in a way that spreads it more widely, in order to "recognise the contribution made by all sections of the community to building the nation's wealth." (Patrick Diamond and Anthony Giddens, 27 June 2005, New Statesman)
Claims economic inequality weakens societies
In most western democracies, the desire to eliminate or reduce economic inequality is generally associated with the political left. One practical argument in favour of reduction is the idea that economic inequality reduces social cohesion and increases social unrest, thereby weakening the society.
There is evidence that this is true (see inequity aversion) and it is intuitive, at least for small face-to-face groups of people. Alberto Alesina, Rafael Di Tella, and Robert MacCulloch find that inequality negatively affects happiness in Europe but not in the United States.
Ricardo Nicolás Pérez Truglia in "Can a rise in income inequality improve welfare?" proposed a possible explanation: some goods might not be allocated through standard markets, but through a signaling mechanism. As long as income is associated with positive personal traits (e.g. charisma), in more heterogeneous-in-income societies income not only buys traditional goods (e.g. food, a house), but it also buys non-market goods (e.g. friends, confidence). Thus, endogenous income inequality may explain a rise in social welfare.
It has also been argued that economic inequality invariably translates to political inequality, which further aggravates the problem.
The main disagreement between the western democratic left and right is basically a disagreement on the importance of each effect, and where the proper balance point should be. Both sides generally agree that the causes of economic inequality based on non-economic differences (race, gender, etc.) should be minimized. There is strong disagreement on how this minimization should be achieved.
Arguments that inequality is not a primary concern
The acceptance of economic inequality is generally associated with the political right. One argument in favor of the acceptance of economic inequality is that, as long as the cause is mainly due to differences in behavior, the inequality provides incentives that push the society towards economically healthy and efficient behaviour. Capitalists see orderly competition and individual initiative as crucial to economic prosperity and accordingly believe that economic freedom is more important than economic equality.
Policy can be considered good if it makes some wealthy people wealthier without making anyone poorer (i.e. a policy which offers a Pareto improvement), even though it increases the total amount of inequality. According to this point of view, discussions of inequality absent any information about absolute levels of wealth are specious, because one population's "poor" may be better off than another's "well-off."
A third argument is that capitalism, especially free market capitalism, results in voluntary transactions among parties. Since the transactions are voluntary, each party at least believes they benefit from the transaction. According to the subjective theory of value, both parties will indeed benefit from the transaction (assuming there is no fraud or extortion involved). | http://www.pustakalaya.org/wiki/wp/e/Economic_inequality.htm | 13
37 | A quantum computer is a computation device that makes direct use of quantum-mechanical phenomena, such as superposition and entanglement, to perform operations on data. Quantum computers are different from digital computers based on transistors. Whereas digital computers require data to be encoded into binary digits (bits), quantum computation uses quantum properties to represent data and perform operations on these data. A theoretical model is the quantum Turing machine, also known as the universal quantum computer. Quantum computers share theoretical similarities with non-deterministic and probabilistic computers. One example is the ability to be in more than one state simultaneously. The field of quantum computing was first introduced by Yuri Manin in 1980 and Richard Feynman in 1981. A quantum computer with spins as quantum bits was also formulated for use as a quantum space-time in 1969.
Although quantum computing is still in its infancy, experiments have been carried out in which quantum computational operations were executed on a very small number of qubits (quantum bits). Both practical and theoretical research continues, and many national governments and military funding agencies support quantum computing research to develop quantum computers for both civilian and national security purposes, such as cryptanalysis.
Large-scale quantum computers will be able to solve certain problems much faster than any classical computer using the best currently known algorithms, like integer factorization using Shor's algorithm or the simulation of quantum many-body systems. There exist quantum algorithms, such as Simon's algorithm, which run faster than any possible probabilistic classical algorithm. Given sufficient computational resources, a classical computer could be made to simulate any quantum algorithm; quantum computation does not violate the Church–Turing thesis. However, the computational basis of 500 qubits, for example, would already be too large to be represented on a classical computer because it would require 2^500 complex values (2^501 bits) to be stored. (For comparison, a terabyte of digital information is only 2^43 bits.)
A classical computer has a memory made up of bits, where each bit represents either a one or a zero. A quantum computer maintains a sequence of qubits. A single qubit can represent a one, a zero, or any quantum superposition of these two qubit states; moreover, a pair of qubits can be in any quantum superposition of 4 states, and three qubits in any superposition of 8. In general, a quantum computer with n qubits can be in an arbitrary superposition of up to 2^n different states simultaneously (this compares to a normal computer that can only be in one of these 2^n states at any one time). A quantum computer operates by setting the qubits in a controlled initial state that represents the problem at hand and by manipulating those qubits with a fixed sequence of quantum logic gates. The sequence of gates to be applied is called a quantum algorithm. The calculation ends with measurement of all the states, collapsing each qubit into one of the two pure states, so the outcome can be at most n classical bits of information.
An example of an implementation of qubits for a quantum computer could start with the use of particles with two spin states: "down" and "up" (typically written |↓⟩ and |↑⟩, or |0⟩ and |1⟩). But in fact any system possessing an observable quantity A, which is conserved under time evolution such that A has at least two discrete and sufficiently spaced consecutive eigenvalues, is a suitable candidate for implementing a qubit. This is true because any such system can be mapped onto an effective spin-1/2 system.
Bits vs. qubits
A quantum computer with a given number of qubits is fundamentally different from a classical computer composed of the same number of classical bits. For example, to represent the state of an n-qubit system on a classical computer would require the storage of 2^n complex coefficients. Although this fact may seem to indicate that qubits can hold exponentially more information than their classical counterparts, care must be taken not to overlook the fact that the qubits are only in a probabilistic superposition of all of their states. This means that when the final state of the qubits is measured, they will only be found in one of the possible configurations they were in before measurement. Moreover, it is incorrect to think of the qubits as only being in one particular state before measurement since the fact that they were in a superposition of states before the measurement was made directly affects the possible outcomes of the computation.
For example: Consider first a classical computer that operates on a three-bit register. The state of the computer at any time is a probability distribution over the different three-bit strings 000, 001, 010, 011, 100, 101, 110, 111. If it is a deterministic computer, then it is in exactly one of these states with probability 1. However, if it is a probabilistic computer, then there is a possibility of it being in any one of a number of different states. We can describe this probabilistic state by eight nonnegative numbers A,B,C,D,E,F,G,H (where A = probability computer is in state 000, B = probability computer is in state 001, etc.). There is a restriction that these probabilities sum to 1.
The state of a three-qubit quantum computer is similarly described by an eight-dimensional vector (a,b,c,d,e,f,g,h), called a ket. However, instead of adding to one, the sum of the squares of the coefficient magnitudes, |a|^2 + |b|^2 + ... + |h|^2, must equal one. Moreover, the coefficients can have complex values. Since only the absolute squares of these complex-valued coefficients correspond to observable probabilities, the phase between any two coefficients (states) represents a meaningful parameter, which presents a fundamental difference between quantum computing and probabilistic classical computing.
If you measure the three qubits, you will observe a three-bit string. The probability of measuring a given string is the squared magnitude of that string's coefficient (i.e., the probability of measuring 000 = |a|^2, the probability of measuring 001 = |b|^2, etc.). Thus, measuring a quantum state described by complex coefficients (a,b,...,h) gives the classical probability distribution (|a|^2, |b|^2, ..., |h|^2) and we say that the quantum state "collapses" to a classical state as a result of making the measurement.
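As a rough numerical sketch of this description (assuming NumPy is available; the amplitudes below are invented purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Eight complex amplitudes (a, b, ..., h) for the basis states 000..111,
# chosen arbitrarily and then normalized so the squared magnitudes sum to 1.
amplitudes = np.array([0.5, 0.1j, 0.2, 0.3, 0.1, 0.4j, 0.2, 0.6], dtype=complex)
amplitudes /= np.linalg.norm(amplitudes)

# The probability of observing each three-bit string is |amplitude|^2.
probabilities = np.abs(amplitudes) ** 2
labels = [format(i, "03b") for i in range(8)]
print(dict(zip(labels, probabilities.round(3))))

# "Measuring" the register samples one string from this distribution,
# after which the state has collapsed to that classical outcome.
outcome = rng.choice(labels, p=probabilities)
print("measured:", outcome)
```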
Note that an eight-dimensional vector can be specified in many different ways depending on what basis is chosen for the space. The basis of bit strings (e.g., 000, 001, ..., 111) is known as the computational basis. Other possible bases are unit-length, orthogonal vectors and the eigenvectors of the Pauli-x operator. Ket notation is often used to make the choice of basis explicit. For example, the state (a,b,c,d,e,f,g,h) in the computational basis can be written as:
- a |000⟩ + b |001⟩ + c |010⟩ + d |011⟩ + e |100⟩ + f |101⟩ + g |110⟩ + h |111⟩, where, e.g., |010⟩ = (0, 0, 1, 0, 0, 0, 0, 0).
The computational basis for a single qubit (two dimensions) is |0⟩ = (1, 0) and |1⟩ = (0, 1).
Using the eigenvectors of the Pauli-x operator, a single qubit is |+⟩ = (1/√2)(1, 1) and |−⟩ = (1/√2)(1, −1).
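A quick check of these two bases, as a small NumPy sketch (variable names are ours, not standard library objects):

```python
import numpy as np

ket0 = np.array([1, 0])             # |0>
ket1 = np.array([0, 1])             # |1>

X = np.array([[0, 1],
              [1, 0]])              # Pauli-x operator

plus = (ket0 + ket1) / np.sqrt(2)   # |+>
minus = (ket0 - ket1) / np.sqrt(2)  # |->

# |+> and |-> are eigenvectors of X with eigenvalues +1 and -1.
print(np.allclose(X @ plus, plus))     # True
print(np.allclose(X @ minus, -minus))  # True
```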
While a classical three-bit state and a quantum three-qubit state are both eight-dimensional vectors, they are manipulated quite differently for classical or quantum computation. For computing in either case, the system must be initialized, for example into the all-zeros string, |000⟩, corresponding to the vector (1,0,0,0,0,0,0,0). In classical randomized computation, the system evolves according to the application of stochastic matrices, which preserve that the probabilities add up to one (i.e., preserve the L1 norm). In quantum computation, on the other hand, allowed operations are unitary matrices, which are effectively rotations (they preserve that the sum of the squares add up to one, the Euclidean or L2 norm). (Exactly what unitaries can be applied depends on the physics of the quantum device.) Consequently, since rotations can be undone by rotating backward, quantum computations are reversible. (Technically, quantum operations can be probabilistic combinations of unitaries, so quantum computation really does generalize classical computation. See quantum circuit for a more precise formulation.)
Finally, upon termination of the algorithm, the result needs to be read off. In the case of a classical computer, we sample from the probability distribution on the three-bit register to obtain one definite three-bit string, say 000. Quantum mechanically, we measure the three-qubit state, which is equivalent to collapsing the quantum state down to a classical distribution (with the coefficients in the classical state being the squared magnitudes of the coefficients for the quantum state, as described above), followed by sampling from that distribution. Note that this destroys the original quantum state. Many algorithms will only give the correct answer with a certain probability. However, by repeatedly initializing, running and measuring the quantum computer, the probability of getting the correct answer can be increased.
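A toy version of this initialize–evolve–measure–repeat loop for a three-qubit register, again assuming NumPy; the Hadamard gate on the first qubit is an arbitrary choice made only to produce a non-trivial output distribution.

```python
import numpy as np

rng = np.random.default_rng(1)

# Initialize the register in |000>, i.e. the vector (1,0,0,0,0,0,0,0).
state = np.zeros(8, dtype=complex)
state[0] = 1.0

# A unitary acting on the register: Hadamard on the first qubit,
# identity on the other two (the Kronecker product builds the 8x8 matrix).
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I = np.eye(2)
U = np.kron(H, np.kron(I, I))

# Unitaries preserve the L2 norm, so the result is still a valid quantum state.
state = U @ state

# Repeatedly "run" the computer: sample a measurement outcome and tally it.
probs = np.abs(state) ** 2
counts = {format(i, "03b"): 0 for i in range(8)}
for _ in range(1000):
    outcome = int(rng.choice(8, p=probs))
    counts[format(outcome, "03b")] += 1
print(counts)  # roughly half 000 and half 100, everything else zero
```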
For more details on the sequences of operations used for various quantum algorithms, see universal quantum computer, Shor's algorithm, Grover's algorithm, Deutsch-Jozsa algorithm, amplitude amplification, quantum Fourier transform, quantum gate, quantum adiabatic algorithm and quantum error correction.
Integer factorization is believed to be computationally infeasible with an ordinary computer for large integers if they are the product of few prime numbers (e.g., products of two 300-digit primes). By comparison, a quantum computer could efficiently solve this problem using Shor's algorithm to find its factors. This ability would allow a quantum computer to decrypt many of the cryptographic systems in use today, in the sense that there would be a polynomial time (in the number of digits of the integer) algorithm for solving the problem. In particular, most of the popular public key ciphers are based on the difficulty of factoring integers (or the related discrete logarithm problem, which can also be solved by Shor's algorithm), including forms of RSA. These are used to protect secure Web pages, encrypted email, and many other types of data. Breaking these would have significant ramifications for electronic privacy and security.
However, other existing cryptographic algorithms do not appear to be broken by these algorithms. Some public-key algorithms are based on problems other than the integer factorization and discrete logarithm problems to which Shor's algorithm applies, like the McEliece cryptosystem based on a problem in coding theory. Lattice-based cryptosystems are also not known to be broken by quantum computers, and finding a polynomial time algorithm for solving the dihedral hidden subgroup problem, which would break many lattice based cryptosystems, is a well-studied open problem. It has been proven that applying Grover's algorithm to break a symmetric (secret key) algorithm by brute force requires time equal to roughly 2^(n/2) invocations of the underlying cryptographic algorithm, compared with roughly 2^n in the classical case, meaning that symmetric key lengths are effectively halved: AES-256 would have the same security against an attack using Grover's algorithm that AES-128 has against classical brute-force search (see Key size). Quantum cryptography could potentially fulfill some of the functions of public key cryptography.
Besides factorization and discrete logarithms, quantum algorithms offering a more than polynomial speedup over the best known classical algorithm have been found for several problems, including the simulation of quantum physical processes from chemistry and solid state physics, the approximation of Jones polynomials, and solving Pell's equation. No mathematical proof has been found that shows that an equally fast classical algorithm cannot be discovered, although this is considered unlikely. For some problems, quantum computers offer a polynomial speedup. The most well-known example of this is quantum database search, which can be solved by Grover's algorithm using quadratically fewer queries to the database than are required by classical algorithms. In this case the advantage is provable. Several other examples of provable quantum speedups for query problems have subsequently been discovered, such as for finding collisions in two-to-one functions and evaluating NAND trees.
Consider a problem that has these four properties:
- The only way to solve it is to guess answers repeatedly and check them,
- The number of possible answers to check is the same as the number of inputs,
- Every possible answer takes the same amount of time to check, and
- There are no clues about which answers might be better: generating possibilities randomly is just as good as checking them in some special order.
For problems with all four properties, the time for a quantum computer to solve this will be proportional to the square root of the number of inputs. That can be a very large speedup, reducing some problems from years to seconds. It can be used to attack symmetric ciphers such as Triple DES and AES by attempting to guess the secret key.
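To make the square-root speedup concrete, here is a small state-vector simulation of Grover's algorithm over N = 2^n items. It is a sketch under simplifying assumptions: the "oracle" simply knows the marked index, whereas a real application would implement it as a circuit over the problem's structure.

```python
import numpy as np

def grover_search(n_qubits, marked, n_iterations=None):
    """Simulate Grover's algorithm on n_qubits, searching for `marked`."""
    N = 2 ** n_qubits
    if n_iterations is None:
        # The optimal number of iterations is about (pi/4) * sqrt(N).
        n_iterations = int(round(np.pi / 4 * np.sqrt(N)))

    state = np.full(N, 1 / np.sqrt(N))     # uniform superposition
    for _ in range(n_iterations):
        state[marked] *= -1                 # oracle: flip the marked amplitude
        state = 2 * state.mean() - state    # diffusion: inversion about the mean
    return n_iterations, np.abs(state[marked]) ** 2

iters, p = grover_search(n_qubits=10, marked=123)
print(f"N = 1024, iterations = {iters}, success probability = {p:.3f}")
# About 25 iterations (~sqrt(1024)) give a success probability near 1,
# whereas random guessing would need on the order of 1024/2 tries.
```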
Since chemistry and nanotechnology rely on understanding quantum systems, and such systems are impossible to simulate in an efficient manner classically, many believe quantum simulation will be one of the most important applications of quantum computing.
There are a number of technical challenges in building a large-scale quantum computer, and thus far quantum computers have yet to solve a problem faster than a classical computer. David DiVincenzo, of IBM, listed the following requirements for a practical quantum computer:
- scalable physically to increase the number of qubits;
- qubits can be initialized to arbitrary values;
- quantum gates faster than decoherence time;
- universal gate set;
- qubits can be read easily.
One of the greatest challenges is controlling or removing quantum decoherence. This usually means isolating the system from its environment as interactions with the external world cause the system to decohere. This effect is irreversible, as it is non-unitary, and is usually something that should be highly controlled, if not avoided. Decoherence times for candidate systems, in particular the transverse relaxation time T2 (for NMR and MRI technology, also called the dephasing time), typically range between nanoseconds and seconds at low temperature.
These issues are more difficult for optical approaches as the timescales are orders of magnitude shorter and an often-cited approach to overcoming them is optical pulse shaping. Error rates are typically proportional to the ratio of operating time to decoherence time, hence any operation must be completed much more quickly than the decoherence time.
If the error rate is small enough, it is thought to be possible to use quantum error correction, which corrects errors due to decoherence, thereby allowing the total calculation time to be longer than the decoherence time. An often cited figure for required error rate in each gate is 10^-4. This implies that each gate must be able to perform its task in one 10,000th of the decoherence time of the system.
Meeting this scalability condition is possible for a wide range of systems. However, the use of error correction brings with it the cost of a greatly increased number of required qubits. The number required to factor integers using Shor's algorithm is still polynomial, and thought to be between L and L^2, where L is the number of bits in the number to be factored; error correction algorithms would inflate this figure by an additional factor of L. For a 1000-bit number, this implies a need for about 10^4 qubits without error correction. With error correction, the figure would rise to about 10^7 qubits. Note that the computation time is about L^2, or about 10^7 steps, which at 1 MHz is about 10 seconds.
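A back-of-the-envelope restatement of that arithmetic, with the caveat that the constants are the rough order-of-magnitude figures quoted above, not a careful resource estimate:

```python
L = 1000                                  # bits in the number to be factored
qubits_without_ecc = 10**4                # rough figure quoted in the text
qubits_with_ecc = qubits_without_ecc * L  # extra factor of L -> about 10^7
steps = 10**7                             # of order L^2 elementary operations
gate_rate_hz = 1e6                        # assume 1 MHz gate speed
print(qubits_with_ecc, "qubits,", steps / gate_rate_hz, "seconds")  # ~10 s
```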
A very different approach to the stability-decoherence problem is to create a topological quantum computer with anyons, quasi-particles used as threads and relying on braid theory to form stable logic gates.
There are a number of quantum computing models, distinguished by the basic elements in which the computation is decomposed. The four main models of practical importance are:
- Quantum gate array (computation decomposed into sequence of few-qubit quantum gates)
- One-way quantum computer (computation decomposed into sequence of one-qubit measurements applied to a highly entangled initial state or cluster state)
- Adiabatic quantum computer or computer based on Quantum annealing (computation decomposed into a slow continuous transformation of an initial Hamiltonian into a final Hamiltonian, whose ground state contains the solution)
- Topological quantum computer (computation decomposed into the braiding of anyons in a 2D lattice)
The Quantum Turing machine is theoretically important but direct implementation of this model is not pursued. All four models of computation have been shown to be equivalent to each other in the sense that each can simulate the other with no more than polynomial overhead.
For physically implementing a quantum computer, many different candidates are being pursued, among them (distinguished by the physical system used to realize the qubits):
- Superconductor-based quantum computers (including SQUID-based quantum computers) (qubit implemented by the state of small superconducting circuits (Josephson junctions))
- Trapped ion quantum computer (qubit implemented by the internal state of trapped ions)
- Optical lattices (qubit implemented by internal states of neutral atoms trapped in an optical lattice)
- Electrically defined or self-assembled quantum dots (e.g. the Loss-DiVincenzo quantum computer) (qubit given by the spin states of an electron trapped in the quantum dot)
- Quantum dot charge based semiconductor quantum computer (qubit is the position of an electron inside a double quantum dot)
- Nuclear magnetic resonance on molecules in solution (liquid-state NMR) (qubit provided by nuclear spins within the dissolved molecule)
- Solid-state NMR Kane quantum computers (qubit realized by the nuclear spin state of phosphorus donors in silicon)
- Electrons-on-helium quantum computers (qubit is the electron spin)
- Cavity quantum electrodynamics (CQED) (qubit provided by the internal state of atoms trapped in and coupled to high-finesse cavities)
- Molecular magnet
- Fullerene-based ESR quantum computer (qubit based on the electronic spin of atoms or molecules encased in fullerene structures)
- Optics-based quantum computer (Quantum optics) (qubits realized by appropriate states of different modes of the electromagnetic field)
- Diamond-based quantum computer (qubit realized by the electronic or nuclear spin of Nitrogen-vacancy centers in diamond)
- Bose–Einstein condensate-based quantum computer
- Transistor-based quantum computer – string quantum computers with entrainment of positive holes using an electrostatic trap
- Rare-earth-metal-ion-doped inorganic crystal based quantum computers (qubit realized by the internal electronic state of dopants in optical fibers)
The large number of candidates demonstrates that the topic, in spite of rapid progress, is still in its infancy. But at the same time, there is also a vast amount of flexibility.
In 2001, researchers were able to demonstrate Shor's algorithm to factor the number 15 using a 7-qubit NMR computer.
In 2005, researchers at the University of Michigan built a semiconductor chip that functioned as an ion trap. Such devices, produced by standard lithography techniques, may point the way to scalable quantum computing tools. An improved version was made in 2006.
In 2009, researchers at Yale University created the first rudimentary solid-state quantum processor. The two-qubit superconducting chip was able to run elementary algorithms. Each of the two artificial atoms (or qubits) were made up of a billion aluminum atoms but they acted like a single one that could occupy two different energy states.
Another team, working at the University of Bristol, also created a silicon-based quantum computing chip, based on quantum optics. The team was able to run Shor's algorithm on the chip. Further developments were made in 2010. Springer publishes a journal ("Quantum Information Processing") devoted to the subject.
In April 2011, a team of scientists from Australia and Japan made a breakthrough in quantum teleportation, successfully transferring a complex set of quantum data with full transmission integrity. The qubits were destroyed in one place and instantaneously resurrected in another, without their superpositions being affected.
In 2011, D-Wave Systems announced the first commercial quantum annealer on the market by the name D-Wave One. The company claims this system uses a 128 qubit processor chipset. On May 25, 2011 D-Wave announced that Lockheed Martin Corporation entered into an agreement to purchase a D-Wave One system. Lockheed Martin and the University of Southern California (USC) reached an agreement to house the D-Wave One Adiabatic Quantum Computer at the newly formed USC Lockheed Martin Quantum Computing Center, part of USC's Information Sciences Institute campus in Marina del Rey. D-Wave's engineers use an empirical approach when designing their quantum chips, focusing on whether the chips are able to solve particular problems rather than designing based on a thorough understanding of the quantum principles involved. This approach was liked by investors more than by some academic critics, who said that D-Wave had not yet sufficiently demonstrated that they really had a quantum computer. Such criticism softened once D-Wave published a paper in Nature giving details, which critics said proved that the company's chips did have some of the quantum mechanical properties needed for quantum computing.
During the same year, researchers working at the University of Bristol created an all-bulk optics system able to run an iterative version of Shor's algorithm. They successfully managed to factorize 21.
In November 2011 researchers factorized 143 using 4 qubits.
In April 2012 a multinational team of researchers from the University of Southern California, Delft University of Technology, the Iowa State University of Science and Technology, and the University of California, Santa Barbara, constructed a two-qubit quantum computer on a doped diamond crystal that can easily be scaled up in size and functionality at room temperature. The two logical qubits were encoded in an electron spin and a nitrogen nuclear spin. A system that generated microwave pulses of controlled duration and shape was developed to protect the qubits against decoherence. Using this computer, Grover's algorithm over four possible search targets produced the right answer on the first try in 95% of cases.
In September 2012, Australian researchers at the University of New South Wales said the world's first quantum computer was just 5 to 10 years away, after announcing a global breakthrough enabling manufacture of its memory building blocks. A research team led by Australian engineers created the first working "quantum bit" based on a single atom in silicon, invoking the same technological platform that forms the building blocks of modern day computers, laptops and phones.
In October 2012, Nobel Prizes were presented to David J. Wineland and Serge Haroche for their basic work on understanding the quantum world - work which may eventually help make quantum computing possible.
In February 2013, a new technique, boson sampling, was reported by two groups using photons in linear optical circuits; it is not a universal quantum computer but may be good enough for practical problems (Science, 15 February 2013).
In May 2013, Google Inc announced that it was launching the Quantum Artificial Intelligence Lab, to be hosted by NASA’s Ames Research Center. The lab will house a 512-qubit quantum computer from D-Wave Systems, and the USRA (Universities Space Research Association) will invite researchers from around the world to share time on it. The goal is to study how quantum computing might advance machine learning.
Relation to computational complexity theory
The class of problems that can be efficiently solved by quantum computers is called BQP, for "bounded error, quantum, polynomial time". Quantum computers only run probabilistic algorithms, so BQP on quantum computers is the counterpart of BPP ("bounded error, probabilistic, polynomial time") on classical computers. It is defined as the set of problems solvable with a polynomial-time algorithm, whose probability of error is bounded away from one half. A quantum computer is said to "solve" a problem if, for every instance, its answer will be right with high probability. If that solution runs in polynomial time, then that problem is in BQP.
BQP is suspected to be disjoint from NP-complete and a strict superset of P, but that is not known. Both integer factorization and discrete log are in BQP. Both of these problems are NP problems suspected to be outside BPP, and hence outside P. Both are suspected to not be NP-complete. There is a common misconception that quantum computers can solve NP-complete problems in polynomial time. That is not known to be true, and is generally suspected to be false.
The capacity of a quantum computer to accelerate classical algorithms has rigid limits—upper bounds of quantum computation's complexity. The overwhelming part of classical calculations cannot be accelerated on a quantum computer. A similar fact takes place for particular computational tasks, like the search problem, for which Grover's algorithm is optimal.
Although quantum computers may be faster than classical computers, those described above can't solve any problems that classical computers can't solve, given enough time and memory (however, those amounts might be practically infeasible). A Turing machine can simulate these quantum computers, so such a quantum computer could never solve an undecidable problem like the halting problem. The existence of "standard" quantum computers does not disprove the Church–Turing thesis. It has been speculated that theories of quantum gravity, such as M-theory or loop quantum gravity, may allow even faster computers to be built. Currently, defining computation in such theories is an open problem due to the problem of time, i.e. there currently exists no obvious way to describe what it means for an observer to submit input to a computer and later receive output.
|Wikimedia Commons has media related to: Quantum computer|
- Chemical computer
- DNA computer
- Electronic quantum holography
- List of emerging technologies
- Natural computing
- Normal mode
- Photonic computing
- Post-quantum cryptography
- Quantum bus
- Quantum cognition
- Quantum gate
- Quantum threshold theorem
- Timeline of quantum computing
- Topological quantum computer
- "Quantum Computing with Molecules" article in Scientific American by Neil Gershenfeld and Isaac L. Chuang
- Manin, Yu. I. (1980). Vychislimoe i nevychislimoe [Computable and Noncomputable] (in Russian). Sov.Radio. pp. 13–15. Retrieved 4 March 2013.
- Feynman, R. P. (1982). "Simulating physics with computers". International Journal of Theoretical Physics 21 (6): 467–488. doi:10.1007/BF02650179.
- Deutsch, David (1992-01-06). "Quantum computation". Physics World.
- Finkelstein, David (1969). "Space-Time Structure in High Energy Interactions". In Gudehus, T.; Kaiser, G. Fundamental Interactions at High Energy. New York: Gordon & Breach.
- New qubit control bodes well for future of quantum computing
- Quantum Information Science and Technology Roadmap for a sense of where the research is heading.
- Simon, D.R. (1994). "On the power of quantum computation". Foundations of Computer Science, 1994 Proceedings., 35th Annual Symposium on: 116–123. doi:10.1109/SFCS.1994.365701. ISBN 0-8186-6580-7.
- Nielsen, Michael A.; Chuang, Isaac L. Quantum Computation and Quantum Information. p. 202.
- Nielsen, Michael A.; Chuang, Isaac L. Quantum Computation and Quantum Information. p. 17.
- Waldner, Jean-Baptiste (2007). Nanocomputers and Swarm Intelligence. London: ISTE. p. 157. ISBN 2-7462-1516-0.
- David P. DiVincenzo (1995). "Quantum Computation". Science 270 (5234): 255–261. Bibcode:1995Sci...270..255D. doi:10.1126/science.270.5234.255. (subscription required)
- Arjen K. Lenstra (2000). "Integer Factoring". Designs, Codes and Cryptography 19 (2/3): 101–128. doi:10.1023/A:1008397921377.
- Daniel J. Bernstein, Introduction to Post-Quantum Cryptography. Introduction to Daniel J. Bernstein, Johannes Buchmann, Erik Dahmen (editors). Post-quantum cryptography. Springer, Berlin, 2009. ISBN 978-3-540-88701-0
- See also pqcrypto.org, a bibliography maintained by Daniel J. Bernstein and Tanja Lange on cryptography not known to be broken by quantum computing.
- Robert J. McEliece. "A public-key cryptosystem based on algebraic coding theory." Jet Propulsion Laboratory DSN Progress Report 42–44, 114–116.
- Kobayashi, H.; Gall, F.L. (2006). "Dihedral Hidden Subgroup Problem: A Survey". Information and Media Technologies 1 (1): 178–185.
- Bennett C.H., Bernstein E., Brassard G., Vazirani U., The strengths and weaknesses of quantum computation. SIAM Journal on Computing 26(5): 1510–1523 (1997).
- Quantum Algorithm Zoo – Stephen Jordan's Homepage
- The Father of Quantum Computing By Quinn Norton 02.15.2007, Wired.com
- David P. DiVincenzo, IBM (2000-04-13). "The Physical Implementation of Quantum Computation". arXiv:quant-ph/0002077 [quant-ph].
- M. I. Dyakonov, Université Montpellier (2006-10-14). "Is Fault-Tolerant Quantum Computation Really Possible?". In: Future Trends in Microelectronics. Up the Nano Creek, S. Luryi, J. Xu, and A. Zaslavsky (eds), Wiley , pp.: 4–18. arXiv:quant-ph/0610117.
- Freedman, Michael H.; Kitaev, Alexei; Larsen, Michael J.; Wang, Zhenghan (2003). "Topological quantum computation". Bulletin of the American Mathematical Society 40 (1): 31–38. arXiv:quant-ph/0101025. doi:10.1090/S0273-0979-02-00964-3. MR 1943131.
- Monroe, Don, Anyons: The breakthrough quantum computing needs?, New Scientist, 1 October 2008
- Das, A.; Chakrabarti, B. K. (2008). "Quantum Annealing and Analog Quantum Computation". Rev. Mod. Phys. 80 (3): 1061–1081. doi:10.1103/RevModPhys.80.1061
- Nayak, Chetan; Simon, Steven; Stern, Ady; Das Sarma, Sankar (2008). "Nonabelian Anyons and Quantum Computation". Rev Mod Phys 80 (3): 1083. arXiv:0707.1889. Bibcode:2008RvMP...80.1083N. doi:10.1103/RevModPhys.80.1083.
- Clarke, John; Wilhelm, Frank (June 19, 2008). "Superconducting quantum bits". Nature 453 (7198): 1031–1042. Bibcode:2008Natur.453.1031C. doi:10.1038/nature07128. PMID 18563154.
- William M Kaminsky (2004). "Scalable Superconducting Architecture for Adiabatic Quantum Computation". arXiv:quant-ph/0403090 [quant-ph].
- Imamoğlu, Atac; Awschalom, D. D.; Burkard, Guido; DiVincenzo, D. P.; Loss, D.; Sherwin, M.; Small, A. (1999). "Quantum information processing using quantum dot spins and cavity-QED". Physical Review Letters 83 (20): 4204. Bibcode:1999PhRvL..83.4204I. doi:10.1103/PhysRevLett.83.4204.
- Fedichkin, Leonid; Yanchenko, Maxim; Valiev, Kamil (2000). "Novel coherent quantum bit using spatial quantization levels in semiconductor quantum dot". Quantum Computers and Computing 1: 58–76. arXiv:quant-ph/0006097. Bibcode:2000quant.ph..6097F.
- Knill, G. J.; Laflamme, R.; Milburn, G. J. (2001). "A scheme for efficient quantum computation with linear optics". Nature 409 (6816): 46–52. Bibcode:2001Natur.409...46K. doi:10.1038/35051009. PMID 11343107.
- Nizovtsev, A. P. et al. (October 19, 2004). "A quantum computer based on NV centers in diamond: Optically detected nutations of single electron and nuclear spins". Optics and Spectroscopy 99 (2): 248–260. Bibcode:2005OptSp..99..233N. doi:10.1134/1.2034610.
- Wolfgang Gruener, TG Daily (2007-06-01). "Research indicates diamonds could be key to quantum storage". Retrieved 2007-06-04.
- Neumann, P. et al. (June 6, 2008). "Multipartite Entanglement Among Single Spins in Diamond". Science 320 (5881): 1326–1329. Bibcode:2008Sci...320.1326N. doi:10.1126/science.1157233. PMID 18535240.
- Rene Millman, IT PRO (2007-08-03). "Trapped atoms could advance quantum computing". Retrieved 2007-07-26.
- Ohlsson, N.; Mohan, R. K.; Kröll, S. (January 1, 2002). "Quantum computer hardware based on rare-earth-ion-doped inorganic crystals". Opt. Commun. 201 (1–3): 71–77. Bibcode:2002OptCo.201...71O. doi:10.1016/S0030-4018(01)01666-2.
- Longdell, J. J.; Sellars, M. J.; Manson, N. B. (September 23, 2004). "Demonstration of conditional quantum phase shift between ions in a solid". Phys. Rev. Lett. 93 (13): 130503. arXiv:quant-ph/0404083. Bibcode:2004PhRvL..93m0503L. doi:10.1103/PhysRevLett.93.130503. PMID 15524694.
- Vandersypen, Lieven M. K.; Steffen, Matthias; Breyta, Gregory; Yannoni, Costantino S.; Sherwood, Mark H.; Chuang, Isaac L. (2001). "Experimental realization of Shor's quantum factoring algorithm using nuclear magnetic resonance". Nature 414 (6866): 883–7. doi:10.1038/414883a. PMID 11780055.
- Ann Arbor (2005-12-12). "U-M develops scalable and mass-producible quantum computer chip". Retrieved 2006-11-17.
- L. DiCarlo, J. M. Chow, J. M. Gambetta, Lev S. Bishop, B. R. Johnson, D. I. Schuster, J. Majer, A. Blais, L. Frunzio, S. M. Girvin, R. J. Schoelkopf (2009-06-28). "Demonstration of two-qubit algorithms with a superconducting quantum processor". Nature 460 (7252): 240–4. Bibcode:2009Natur.460..240D. doi:10.1038/nature08121. PMID 19561592. Retrieved 2009-07-02.
- "Scientists Create First Electronic Quantum Processor". 2009-07-02. Retrieved 2009-07-02.
- New Scientist (2009-09-04). "Code-breaking quantum algorithm runs on a silicon chip". Retrieved 2009-10-14.
- "New Trends in Quantum Computation".
- Quantum Information Processing. Springer.com. Retrieved on 2011-05-19.
- "University of New South Wales".
- "Engadget, First light wave quantum teleportation achieved, opens door to ultra fast data transmission".
- "Learning to program the D-Wave One". Retrieved 11 May 2011.
- "D-Wave Systems sells its first Quantum Computing System to Lockheed Martin Corporation". 2011-05-25. Retrieved 2011-05-30.
- "Operational Quantum Computing Center Established at USC". 2011-10-29. Retrieved 2011-12-06.
- Quantum annealing with manufactured spins Nature 473, 194–198, 12 May 2011
- The CIA and Jeff Bezos Bet on Quantum Computing Technology Review October 4, 2012 by Tom Simonite
- Enrique Martin Lopez, Anthony Laing, Thomas Lawson, Roberto Alvarez, Xiao-Qi Zhou, Jeremy L. O'Brien (2011). "Implementation of an iterative quantum order finding algorithm". Nature Photonics 6 (11): 773–776. arXiv:1111.4147. doi:10.1038/nphoton.2012.259.
- Quantum computer with Von Neumann architecture
- Quantum Factorization of 143 on a Dipolar-Coupling NMR system
- IBM Says It's 'On the Cusp' of Building a Quantum Computer
- Quantum computer built inside diamond
- "Australian engineers write quantum computer 'qubit' in global breakthrough". The Australian. Retrieved 3 October 2012.
- "Breakthrough in bid to create first quantum computer". The University of New South Wales. Retrieved 3 October 2012.
- Frank, Adam (October 14, 2012). "Cracking the Quantum Safe". New York Times. Retrieved October 14, 2012.
- Overbye, Dennis (October 9, 2012). "A Nobel for Teasing Out the Secret Life of Atoms". New York Times. Retrieved October 14, 2012.
- The Physics arXiv Blog (November 15, 2012). "First Teleportation from One Macroscopic Object to Another". MIT Technology Review. Retrieved November 17, 2012.
- Bao, Xiao-Hui; Xu, Xiao-Fan; Li, Che-Ming; Yuan, Zhen-Sheng; Lu, Chao-Yang; Pan, Jian-wei (November 13, 2012). "Quantum teleportation between remote atomic-ensemble quantum memories". arXiv. arXiv:1211.2892.
- "Launching the Quantum Artificial Intelligence Lab". Research@Google Blog. Retrieved 16 May 2013.
- Nielsen, p. 42
- Nielsen, p. 41
- Bernstein, Ethan; Vazirani, Umesh (1997). "Quantum Complexity Theory". SIAM Journal on Computing 26 (5): 1411. doi:10.1137/S0097539796300921.
- Ozhigov, Yuri (1999). "Quantum Computers Speed Up Classical with Probability Zero". Chaos Solitons Fractals 10 (10): 1707–1714. arXiv:quant-ph/9803064. Bibcode:1998quant.ph..3064O. doi:10.1016/S0960-0779(98)00226-4.
- Ozhigov, Yuri (1999). "Lower Bounds of Quantum Search for Extreme Point". Proc.Roy.Soc.Lond. A455 (1986): 2165–2172. arXiv:quant-ph/9806001. Bibcode:1999RSPSA.455.2165O. doi:10.1098/rspa.1999.0397.
- Nielsen, p. 126
- Scott Aaronson, NP-complete Problems and Physical Reality, ACM SIGACT News, Vol. 36, No. 1. (March 2005), pp. 30–52, section 7 "Quantum Gravity": "[...] to anyone who wants a test or benchmark for a favorite quantum gravity theory,[author's footnote: That is, one without all the bother of making numerical predictions and comparing them to observation] let me humbly propose the following: can you define Quantum Gravity Polynomial-Time? [...] until we can say what it means for a ‘user’ to specify an ‘input’ and ‘later’ receive an ‘output’—there is no such thing as computation, not even theoretically." (emphasis in original)
- Nielsen, Michael and Chuang, Isaac (2000). Quantum Computation and Quantum Information. Cambridge: Cambridge University Press. ISBN 0-521-63503-9. OCLC 174527496.
- Derek Abbott, Charles R. Doering, Carlton M. Caves, Daniel M. Lidar, Howard E. Brandt, Alexander R. Hamilton, David K. Ferry, Julio Gea-Banacloche, Sergey M. Bezrukov, and Laszlo B. Kish (2003). "Dreams versus Reality: Plenary Debate Session on Quantum Computing". Quantum Information Processing 2 (6): 449–472. arXiv:quant-ph/0310130. doi:10.1023/B:QINP.0000042203.24782.9a. hdl:2027.42/45526.
- David P. DiVincenzo (2000). "The Physical Implementation of Quantum Computation". Experimental Proposals for Quantum Computation. arXiv:quant-ph/0002077
- David P. DiVincenzo (1995). "Quantum Computation". Science 270 (5234): 255–261. Bibcode:1995Sci...270..255D. doi:10.1126/science.270.5234.255. Table 1 lists switching and dephasing times for various systems.
- Richard Feynman (1982). "Simulating physics with computers". International Journal of Theoretical Physics 21 (6–7): 467. Bibcode:1982IJTP...21..467F. doi:10.1007/BF02650179.
- Gregg Jaeger (2006). Quantum Information: An Overview. Berlin: Springer. ISBN 0-387-35725-4. OCLC 255569451.
- Stephanie Frank Singer (2005). Linearity, Symmetry, and Prediction in the Hydrogen Atom. New York: Springer. ISBN 0-387-24637-1. OCLC 253709076.
- Giuliano Benenti (2004). Principles of Quantum Computation and Information Volume 1. New Jersey: World Scientific. ISBN 981-238-830-3. OCLC 179950736.
- Sam Lomonaco Four Lectures on Quantum Computing given at Oxford University in July 2006
- C. Adami; N. J. Cerf (1998). "Quantum computation with linear optics". arXiv:quant-ph/9806048v1.
- Ian Mitchell (1998). "Computing Power into the 21st Century: Moore's Law and Beyond".
- Gordon E. Moore (1965). "Cramming more components onto integrated circuits". Electronics Magazine.
- R. W. Keyes (1988). "Miniaturization of electronics and its limits". IBM Journal of Research and Development.
- M. A. Nielsen; E. Knill; R. Laflamme. "Complete Quantum Teleportation by Nuclear Magnetic Resonance".
- Lieven M. K. Vandersypen; Constantino S. Yannoni; Isaac L. Chuang (2000). Liquid state NMR Quantum Computing.
- Imai Hiroshi; Hayashi Masahito (2006). Quantum Computation and Information. Berlin: Springer. ISBN 3-540-33132-8.
- Andre Berthiaume (1997). "Quantum Computation".
- Daniel R. Simon (1994). "On the Power of Quantum Computation". Institute of Electrical and Electronic Engineers Computer Society Press.
- "Seminar Post Quantum Cryptology". Chair for communication security at the Ruhr-University Bochum.
- Laura Sanders, (2009). "First programmable quantum computer created".
- "New trends in quantum computation".
- Stanford Encyclopedia of Philosophy: "Quantum Computing" by Amit Hagar.
- Quantiki – Wiki and portal with free-content related to quantum information science.
- Scott Aaronson's blog, which features informative and critical commentary on developments in the field
- Quantum Mechanics and Quantum Computation — Coursera course by Umesh Vazirani
- Quantum computing for the determined — 22 video lectures by Michael Nielsen
- Video Lectures by David Deutsch
- Lectures at the Institut Henri Poincaré (slides and videos)
- Online lecture on An Introduction to Quantum Computing, Edward Gerjuoy (2008)
- Quantum Computing research by Mikko Möttönen at Aalto University (video) | http://en.wikipedia.org/wiki/Quantum_Computation | 13 |
16 | How to calculate the greatest common divisor:
gcd(15, 5) = 5, gcd(7, 9) = 1, gcd(12, 9) = 3, gcd(81, 57) = 3.
The gcd of two integers can be found by repeated application of the division algorithm; this is known as the Euclidean Algorithm. You repeatedly divide the divisor by the remainder until the remainder is 0. The gcd is the last non-zero remainder in this algorithm. The following example shows the algorithm.
Finding the gcd of 81 and 57 by the Euclidean Algorithm:
81 = 1(57) + 24
57 = 2(24) + 9
24 = 2(9) + 6
9 = 1(6) + 3
6 = 2(3) + 0.
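The repeated division above is easy to express in code; a minimal Python sketch (the function name is ours, not part of the original page):

```python
def gcd(a, b):
    """Greatest common divisor by the Euclidean algorithm."""
    while b != 0:
        a, b = b, a % b   # divide and keep the remainder
    return a

print(gcd(81, 57))                        # 3, matching the worked example
print(gcd(15, 5), gcd(7, 9), gcd(12, 9))  # 5, 1, 3
```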
It is well known that if the gcd(a, b) = r then there exist integers p and s so that:
p(a) + s(b) = r.
To find p and s for a = 81, b = 57, we work backwards through the equations above. Starting with the next to last line, we have:
3 = 9 - 1(6).
The line above it gives 6 = 24 - 2(9), so 3 = 9 - 1(24 - 2(9)) = 3(9) - 1(24).
The line above that gives 9 = 57 - 2(24), so 3 = 3(57 - 2(24)) - 1(24) = 3(57) - 7(24).
Finally, the first line gives 24 = 81 - 1(57), so 3 = 3(57) - 7(81 - 1(57)) = 10(57) - 7(81).
Thus p = -7 and s = 10 satisfy p(81) + s(57) = 3.
The procedure we have followed above is a bit messy because of all the back substitutions we have to make. It is possible to reduce the amount of computation involved in finding p and s by doing some auxillary computations as we go forward in the Euclidean algorithm (and no back substitutions will be necessary). This is known as the extended Euclidean Algorithm.
Before presenting this extended Euclidean algorithm, we shall look at a special application that is the most common usage of the algorithm. We will give a form of the algorithm which only solves this special case, although the general algorithm is not much more difficult.
Consider the problem of setting up the Hill cryptosystem. We were forced to do arithmetic modulo 26, and sometimes we had to find the inverse of a number mod 26. This turned out to be a difficult task (and not always possible). We observed that a number x had an inverse mod 26 (i.e., a number y so that xy = 1 mod 26) if and only if gcd(x, 26) = 1. There is nothing special about 26 here, so let us consider the general case of finding inverses of numbers modulo n. The inverse of x exists if and only if gcd(x, n) = 1. We now know that if this is true, there exist integers p and s so that p(x) + s(n) = 1. But this means that p(x) = 1 + (-s)(n), i.e., p(x) ≡ 1 (mod n), so p (reduced mod n) is the inverse of x.
The Extended Euclidean Algorithm for finding the inverse of a number mod n. We will number the steps of the Euclidean algorithm starting with step 0. The quotient obtained at step i will be denoted by q_i. As we carry out each step of the Euclidean algorithm, we will also calculate an auxiliary number, p_i. For the first two steps, the value of this number is given: p_0 = 0 and p_1 = 1. For the remainder of the steps, we recursively calculate p_i = p_(i-2) - p_(i-1) q_(i-2) (mod n). Continue this calculation for one step beyond the last step of the Euclidean algorithm.
The algorithm starts by "dividing" n by x. If the last non-zero remainder occurs at step k, then if this remainder is 1, x has an inverse and it is p_(k+2). (If the remainder is not 1, then x does not have an inverse.) Here is an example:
Find the inverse of 15 mod 26.
Step 0:  26 = 1(15) + 11     p_0 = 0
Step 1:  15 = 1(11) + 4      p_1 = 1
Step 2:  11 = 2(4) + 3       p_2 = 0 - 1(1) mod 26 = 25
Step 3:  4 = 1(3) + 1        p_3 = 1 - 25(1) mod 26 = -24 mod 26 = 2
Step 4:  3 = 3(1) + 0        p_4 = 25 - 2(2) mod 26 = 21
                             p_5 = 2 - 21(1) mod 26 = -19 mod 26 = 7
The last non-zero remainder, 1, occurs at step 3, so 15 has an inverse mod 26, and it is p_5 = 7. Check: 15(7) = 105 = 4(26) + 1, so 15(7) ≡ 1 (mod 26). | http://www.softmath.com/tutorials2/how-to-calculate-greatest-common-divisor.html | 13
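The table above can be reproduced with a short program. The sketch below follows the same recursion p_i = p_(i-2) - p_(i-1) q_(i-2) (mod n); the function and variable names are ours.

```python
def modular_inverse(x, n):
    """Inverse of x mod n via the extended Euclidean algorithm, or None."""
    p_prev, p_curr = 0, 1              # p_0 and p_1 from the text
    a, b = n, x                        # step 0 "divides" n by x
    while b != 0:
        q, r = divmod(a, b)            # quotient and remainder of this step
        p_prev, p_curr = p_curr, (p_prev - p_curr * q) % n
        a, b = b, r
    # a is now the last non-zero remainder, i.e. gcd(n, x);
    # p_prev holds p_(k+2), the candidate inverse.
    return p_prev if a == 1 else None

print(modular_inverse(15, 26))         # 7, matching the table above
print(15 * 7 % 26)                     # 1, confirming that 7 is the inverse
```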
15 | In analyzing the architectural history of the Acropolis one should begin by considering the original and natural appearance of the hill. The most important fact to be considered is the position of the crest of the rock of the Acropolis: this crest runs below the northern flank of the Parthenon. To the north of this crest the hill descends with a gentle slope, forming a natural plateau which extends roughly to the present northern wall of the Acropolis. Before 480 B.C. this plateau constituted the essential part of the Acropolis and the Old Temple of Athena was located in the middle of it. To the south of the crest the hill descended with a sharp decline, so that this area could not be used for major constructions. On this side the Pelasgic Wall, which marked the limit of the Acropolis up to 480 B.C., ran rather close to the crest of the hill. The foundations of the Parthenon, towards their southeast corner, cover part of the Pelasgic Wall.
After the defeat of the Persians, when it was decided to leave the ruins of the Old Temple exposed and to replace this temple with another one twice its size, the only space available was toward the south. In order to expand the surface of the Acropolis to the south of the crest, it was necessary to erect a high retaining wall, called the Cimonian Wall, which is today the south wall of the Acropolis, and to fill the space behind it so as to form to the south of the crest a gentle slope equivalent to that which naturally existed to the north of it.
The northern flank of the Parthenon covers the crest of the Acropolis, but the rest of this temple is to the south, in the area of the added artificial filling. Hence, the platform of the Parthenon had to be supported by a huge substructure composed of blocks of poros limestone, averaging 0.50 m in height. At the southeast corner of the Parthenon there are 21 layers of this substructure, providing a foundation for the three marble steps. At the southwest corner the number of the layers is about half as many. The number of layers tapers off as one proceeds from south to north and the level of the natural ground rises.
The discovery of the substructure of the Parthenon was the result of the very first digging in the area of the Acropolis, conducted by Ludwig Ross, the German archaeologist appointed Curator of the Antiquities of Athens at the time of the establishment of the Kingdom of Greece.
When it became possible to excavate on the Acropolis, because it had ceased to be used as a military installation, Ross decided that his first task should be that of tracing the remains of the temple destroyed by the Persians. He searched for it under the platform of the Periclean Parthenon. Cutting trenches, he laid bare the foundations in 1835-1836, at a time when the Parthenon itself had not yet been studied in detail. The results were astounding: not only did the Periclean Parthenon rest on the huge substructure just described, but this substructure was built for a different temple.
Ross noticed that the regular outline of the poros substructure does not coincide, either in dimensions or in position, with that of the Periclean Parthenon. The rectangle of the substructure is narrower and longer, and the outline of the lowest step of the Parthenon is displaced to the north and to the west in relation to it. In other words, the two main axes of the platform of the Periclean Parthenon are respectively more to the north and more to the west than the two main axes of the substructure. Quoting Hill's figures, the lowest step of the Periclean Parthenon measures 33,690 x 72,320 mm., whereas the substructure measures 31,390 x 76,816 mm. The lowest step of the Periclean Parthenon is displaced to the inside of the substructure by 1,656 mm. on the south side and by 4,258 mm. on the east side, while on the north side the Periclean Parthenon extends well beyond the substructure, resting either on natural ground or on a thin layer of rubble for a distance of 3,956 mm. Only on the west side do the lowest step of the Parthenon and the limit of the substructure run close to each other.
It is obvious that the substructure was not planned for the Periclean Parthenon, because it extends too much to the south and to the east, just in those directions where a lower level of the natural ground had to be reached and hence the costs of construction were greater. Furthermore, the four uppermost layers of the substructure are composed of blocks which have finished faces, which means that they were intended to be exposed above ground level, whereas they were concealed in the Periclean Parthenon by a pavement which extended around the lowest of the three marble steps of the platform.
Ross drew the inescapable conclusion that the substructure had been erected in order to support a temple different from the Periclean Parthenon. Actually two steps of the platform of this temple are to be seen still in place, sandwiched between the substructure and the three steps of the Periclean Parthenon. The distinction is obvious, because the two steps, like the substructure, are of poros limestone, whereas the platform of the Periclean Parthenon is of Pentelic marble. Ross announced triumphantly that the remains of the temple destroyed by the Persians in 480 B.C. had been found. But the issue is much more complex than Ross had assumed it to be. This second conclusion of his was erroneous, although not unreasonable on the basis of the evidence available at the time; there was no reason then to consider the alternative explanation that there had been a change of plans in the construction of the Parthenon. But, whereas Ross expected to be crowned with glory, his conclusions met with a hostile reception. Even then antiquarians (one did not speak of archaeologists yet) were not impressed by arguments of a quantitative nature. The opposition was sharpened by the circumstance that the first formal announcement of Ross's discoveries was not made by him but by Penrose in his monumental book on the dimensions of the Parthenon, a book which was seen as a threat by the community of classical scholars for its insistence on the importance of exact measurement.
Following a terminology introduced by Dorpfeld, I shall refer to the Periclean Parthenon as Parthenon III, whereas I shall refer to the proto-Parthenon identified by Ross as Parthenon I.
Although the evidence for Parthenon I is absolutely clear even to a rather casual observer, Ross's conclusion was met with opposition, an opposition that has grown steadily more bitter and more irrational in the following one hundred and forty years.
The first volume of Ross's report appeared in 1855, when he was no longer in a condition to meet opposition with equanimity (the second volume appeared posthumously in 1861). In 1843 the King of Greece, Otto I, confronted with the first of the popular revolts against the autocratic tendencies of the new dynasty, tried to appease his subjects by dismissing a number of public officials who, like him, had come from Germany. Ludwig Ross lost the position of Curator of Antiquities and professor of archaeology in a country which by then he considered his own. This event, and an excruciating disease of the spine, caused a mood which culminated in suicide. When he wrote his report on what he had found below the platform of the Periclean Parthenon, Ross was a man desperately bent on winning an argument.
In order to circumvent the opposition, Ross did not insist on the discrepancy of dimensions between the substructure and the Periclean Parthenon and did not quote any figures. Instead, he tried to adduce arguments which would be more acceptable to his colleagues. He thought he could build an argument on the conclusion reached by Colonel William Leake concerning the column drums, 24 in number, imbedded in the north wall of the Acropolis.
When, after the end of Turkish rule, tourists began to flock to Athens, Colonel Leake wrote the first guide to the antiquities of the city. In his guide he had to pay particular attention to the mentioned column drums. The modern city of Athens, like the medieval Athens, or Setine, extends to the north of the Acropolis. For this reason, whoever walks through the streets of Athens sees the column drums every time he lifts his eyes toward the Acropolis. Because of the play of light and shadows, they are clearly visible against the flat background of the north wall of the Acropolis. These drums are the first intriguing sight for the tourist who, having just alighted in Athens, proceeds eagerly to the discovery of the ancient city. Furthermore, these drums will come to his attention every time he spends an evening eating or drinking at the foot of the northern slope of the Acropolis, as visitors usually do. For these reasons Leake had to try to satisfy the curiosity of the readers about these column drums and explain why they are where they are, although they would have been of little interest by themselves.
In order to grasp the problem of the column drums inserted into the north wall of the Acropolis, it is necessary to keep in mind the difference between the north wall and the south wall.
The south wall is a monumental and unitary construction. It runs almost in a straight line, being composed of two segments which meet at a very obtuse angle in front of the southwest corner of the Parthenon. The south wall was constructed for the specific purpose of extending the area of the Acropolis to the south in order to make possible the construction of the Parthenon. For this reason the south wall is much higher than the north wall: both walls reach about the same level at the top, but the south wall begins about twelve meters lower. The course of the south wall is completely different from that of the wall which existed before the Persian Wars. The earlier Pelasgic Wall ran an irregular course closer to the crest of the Acropolis. As I have mentioned earlier, the remains of the Pelasgic Wall are covered in part by the substructure of the Parthenon. The new south wall was called Cimonian because it was built by Cimon with the booty of the victory he scored over the Persians at the battle of Eurymedon (about 468 B.C.). In Plutarch's Life of Cimon (ch. xiii) it is stated that Cimon's campaign against the Persians in Cyprus (concluded in 449 B.C.) was so successful that not only did it force the Persians to sign a peace treaty with the Athenians, the Peace of Callias, but it also provided such a large booty that it permitted the laying of the foundations for the long walls between Athens and the Peiraeus and the erection of the south wall of the Acropolis. Ross simply refused to accept this statement at face value.
The north wall too was built after the Persian Wars, but it follows an irregular course which was essentially that of the pre-Persian walls. It consists of separate sections built at different times. The northernmost section of the north wall is an angular salient which forms a terrace around the Erechtheion and may be dated to the last third of the fifth century B.C. In this section of the north wall there are incorporated fragments of the entablature of the Old Temple destroyed by the Persians in 480 B.C. I have dealt with these fragments of entablature in my discussion of the dimensions of the Old Temple. To the east of the Erechtheion there is another section of the north wall which was built somewhat earlier and incorporates the 24 column drums which are under discussion.
It is necessary to stress the fact which has been neglected by some writers: The column drums are of Pentelic marble, like the Parthenon, whereas the fragments of entablature, which are located more to the west and to the north, are of poros limestone, since they are remains of the Old Temple.
When Leake, British military agent for Greece and the Levant and amateur archaeologist, wrote his Topography of Athens for the benefit of educated travelers, he did not observe this distinction. Since Leake's account had a great influence on the later opinions of archaeologists, but is steadily misquoted, I cite it here in full:
In the middle of the northern side, the body of the work, though not modern, is evidently less ancient than the Pelasgic fortress. Entire courses of masonry are formed of pieces of Doric columns, which were almost as large as those of the Parthenon, and there are other courses consisting of the component blocks of a Doric entablature of corresponding dimensions. These perhaps are portions of the wall, as it was rebuilt after the Persian war, when (as Thucydides informs us) the ruins of former buildings were much employed for this purpose, the devastations of the Persians having left an abundance of materials of this kind. Thucydides, it is true, alludes more particularly to the peribolus of the Asty, as having been thus hastily constructed, during the intentional delays of the embassy of Themistocles to Sparta; but we can hardly doubt that about this time, the northern wall of the Acropolis was repaired, since it is not to be supposed that when the Cimonian or southern wall was rebuilt twelve years after the retreat of the Persians, any other part of the Acropolis was more in need of reparation...
I have already adverted to some portions of columns, of very ancient date, which probably are inserted in the northern wall of the Acropolis, and were probably placed there at the time of the repairs which followed the Persian war. They belonged apparently to some ruined edifice of large dimensions on the summit of the hill, since it is scarcely to be believed that they were raised to that height from below, for such a purpose. It is not unlikely, therefore, that they were the columns of the more ancient Parthenon, built perhaps in the seventh century, (for their workmanship can hardly be ascribed to an earlier date), at which time the Cecropian hill having long ceased to be a polis, which was its state when the Erechtheium was founded, there was a space on the highest part of the citadel and sacred inclosure, applicable to a large temple. The columns in the northern wall were particularly fluted, and not very different in diameter from those of the existing Parthenon.
[footnote:] Having climbed up to the wall with difficulty, I measured one of the flutings and found it 11.3 inches. We may assume that there were twenty flutings, as the exceptions to that number in the Doric order are rare, and twenty is the number, as well in the Parthenon as in the older temples of Corinth, Syracuse, and Aegina. The columns, therefore, in the walls of the Acropolis, were probably more than six feet in diameter.
As I have said, Leake did not notice that the column drums are of Pentelic marble and that the fragments of entablature are of poros limestone. Since both are incorporated into sections of the north wall, he concluded that they belong to the same temple. He stated incorrectly that the column drums and fragments of entablature are of corresponding dimensions. As to dimensions, he limited himself to examining the column drums. He recognized that these column drums are similar to those of the Parthenon. Hence, he tried to establish whether they had the same dimensions as the drums of the columns of the Parthenon. His test was conducted in a cursory manner: he climbed on a ladder and measured the width of a flute, guessing that the flutes are twenty, as in the columns of the Parthenon. He arrived at the result that the flutes have a width of 11.3 English inches. Perhaps there was a mechanical error in Leake's reporting, because when Ross repeated the measurement he arrived at the figure of 11 ¾ English inches. On the basis of his erroneous figure, Leake drew the conclusion that the drums used to belong to a temple different from the Parthenon, but similar to it. This temple, similar to the Parthenon, should be the temple which preceded the Parthenon. If Leake had arrived at the same figure as Ross, he would have said that the drums used to belong to the Parthenon, as they actually did.
Having concluded that the marble column drums did not belong to the Parthenon and having associated them with the limestone fragments of entablature, which actually used to be part of the Old Temple, Leake tried to explain why these architectural elements were incorporated into the north wall. He thought that the explanation is provided by a passage of Thucydides.
The historian Thucydides relates that after the withdrawal of the Persian invaders the Athenians hastened to rebuild the wall of their city. It must be noted that there seems to be a contradiction between Thucydides and Herodotus, since the latter historian in his account of the Persian invasion does not mention the existence of any walls of Athens. Since Greek cities began to be surrounded by city walls at the beginning of the fifth century B.C., it could be that the Athenians had started to build walls around their city before the Persian invasion but had not completed the work by 480 B.C. But whether it was a matter of rebuilding the walls or of building them for the first time, Thucydides states that the Spartans objected to the Athenian plan to surround their city with walls as a defense against a possible new Persian invasion. Thucydides relates an anecdote to the effect that the Athenian leader Themistocles deceived the Spartans by pretending to negotiate the issue, while at the same time urging the Athenians to rush the work of construction so as to confront the Spartans with a fait accompli. In order to prove that this unlikely story is a historical fact, Thucydides adduces the following supporting factual evidence (I.93):
This was the way in which the Athenians put walls around their city in a short time. Even today one can see that the construction was carried through in a hurry: the foundations are made of all sorts of stones, which in some cases were not cut to fit them together, as they were being laid into position in the order by which they were brought to the work. Besides, they took pillars from tombs and unfinished blocks from other buildings to insert them into the walls. Since the circuit of the walls was extended in all directions, there was one reason more why the Athenians should save time by removing indiscriminately anything that was available.
Whether the walls were rebuilt or built for the first time and whether it is true or not that the Athenians were able to erect them without alerting the Spartans, it is quite likely that the Athenians fortified their city in a rush, since a second Persian invasion was expected. But what is important is that Thucydides speaks of the building of the walls of the city (astu). Leake drew the inference that if the walls of Athens were rebuilt in a hurry, so must have been the walls of the Acropolis: on the basis of this inference he concluded that the north wall of the Acropolis was built soon after 479 B.C., incorporating into it the remains of the temple destroyed by the Persians, for the sake of speeding the work. Leake grants that Thucydides does not mention the walls of the Acropolis, but assumes that the Athenians, in order to protect themselves, rushed to fortify both the Acropolis and the city. Today we know that for the Greeks the fortification of an entire city and the fortification of its Acropolis were mutually exclusive concepts. The fortification of entire cities coincides with the origin of democracy; for the democrats a fortified Acropolis could be a dangerous instrument of tyrannical or oligarchic rule. This is probably the reason why the Spartans, ill-disposed towards democracy, objected to the building of the walls of Athens. Not many years earlier the Spartans had sent troops to Athens to support an oligarchic coup against the newly-born Athenian democracy. At that time the democrats rose up in arms and put a siege around the oligarchs and the Spartans, who had shut themselves up within the walls of the Acropolis. It is almost certain that it was after this experience that the Athenians opened up the walls of the Acropolis by building the Propylaia. In any case it is certain that by the time of the Persian invasion parts of the Propylaia had been built and that the walls of the Acropolis had been partly opened. It is most unlikely that the Athenian democratic party, buoyed by the victories of Salamis and Plataia, would have supported the construction of a fortification wall around the Acropolis.
Leake was not aware of the existence of the above-stated objections against his hypothesis (he does not claim to advance anything more than an hypothesis, since he qualifies his conclusion by the word perhaps); but he was aware of the existence of another objection, which in my opinion is peremptory. The hypothesis that the north wall was rebuilt in a hurry soon after 479 B.C. for the sake of putting the Acropolis in a condition to be used as a fortress is contradicted by the fact that the south wall was constructed about twelve years later by Cimon with the spoils of the triumphant campaign he had conducted against the Persians, a campaign which culminated with the victory of Eurymedon (about 468 B.C.). If the Athenians had felt the need to rebuild in a hurry the defenses of the Acropolis, they would not have waited more than twelve years to complete their circuit.
Ross noticed that there is another weakness in Leake's chain of arguments: the north wall does not show any sign of having been put together in a rush, since it is made of regularly-squared poros blocks. Ross intended to keep alive Leake's hypothesis because this would allow him to date the north wall just after 479 B.C., but at the same time he denied its factual foundations. According to logic, Ross should have completely discarded Leake's hypothesis, but it was expedient for him to hold on to the contention that the column drums imbedded in the north wall used to belong to the Old Temple. Since he recognized that the insertion of a few column drums in a carefully-constructed wall could not be considered proof of haste, Ross shifted his line of reasoning by declaring that the column drums were incorporated into the north wall not merely because of a shortage of construction materials, but also because of a political reason: to serve as a reminder of the Persian attack. Even this alternative explanation is unacceptable since, if the north wall was built at any time before 450 B.C., when not only the entire Acropolis but most of the other public buildings of Athens were lying in shambles, there would not have been any need to expose a few column drums in order to remind the public of what the Athenians had suffered. The argument of Leake as modified by Ross is self-contradictory, but it has been repeated for over a century. For instance, the Hachette tourists' guide to Greece, which reflects the consensus of the American and French archaeological missions, states:
After the Persian Wars, Themistocles rebuilt the W. and N. ramparts... In spite of the hurried nature of the work, the material gathered haphazard was arranged in a decorative manner; the frieze with its triglyphs of tufa and its metopes of marble surmounted the architrave, the whole crowned with cornices. This arrangement, which can be seen from a distance, was a constant reminder to the Athenians of barbaric vandalism.
As recently as 1963 the Oxford historian Russell Meiggs argued the same way:
The building of the new city walls was accompanied by a hasty repair of the Acropolis defenses, and here too material from the sack was used. Looking up from the Agora to the north face of the Acropolis, one sees a wall of strongly assorted stones irregularly packed together; but in the irregularity the eye focuses on a stretch of deliberate order. Near the top, imbedded in the wall, is a conspicuous line of thick column drums. They are not packed together at random; they are deliberately placed in line to catch the eye... This was their war memorial, and on the top the ruined temple remained in ruins.2
In order to reconcile the deliberate order with irregularity, Meiggs ascribes the former to the column drums and the latter to the walls, whereas Ross had maintained the opposite view. Ross was on more solid ground, since one may dispute whether the insertion of the column drums speeded or delayed the construction of the north wall, but it is certain that the statement that one sees a wall of strongly assorted stones irregularly packed together is contrary to fact.
In my opinion, the fragments of the entablature of the Old Temple and the column drums, which actually, as I will explain later, were faulty pieces discarded during the construction of the Periclean Parthenon, were used because they were available and because they break the monotony of the wall. Any visitor to Athens can realize that the column drums give perspicuity, with their play of light and shadows, to the wall into which they are imbedded. The architects were faced with the problem that the north wall is much lower than the south wall; in order to correct the imbalance they broke the surface of the north wall, since a wall broken by lines appears bigger than a plain flat wall.
If Ross had been satisfied with claiming merely the honor of having discovered the foundations of a temple older than the Periclean Parthenon, that is, a proto-Parthenon, on top of which there was later erected the Periclean Parthenon, scholarship would have been spared more than a century of useless fabrications. For there is a clear indication of the date of construction of the substructure of the Parthenon. As I have already mentioned, the four top layers of the substructure of the Parthenon have been carefully dressed and finished with drafted margins, showing that they were intended to be visible, whereas the lower layers, with open joints and faces not all in the same plane, were obviously intended to be covered. What covered the lower layers was the earth filling retained by the Cimonian Wall. Since a temple could not abut on a precipice, but had to be surrounded by a terrace, there was constructed a huge wall running to the south of the substructure, substantially parallel to it. This wall was a monumental enterprise in itself, well in proportion to the enterprise initiated with the construction of the substructure. This wall, which is the Cimonian Wall, delimited the entire south side of the Acropolis. It was built of poros blocks, receding one from the other in order to counteract the strong pressure of the earth fillings that were piled between it and the crest of the Acropolis. The Cimonian Wall was made to extend as far to the south as possible just south of the substructure, its two straight segments meeting at a very obtuse angle in correspondence with the southwest corner of the Parthenon, because here there was the greatest need of extending the surface of the Acropolis. At the same time the substructure itself made the extension possible by reducing the amount of loose filling pressing against the wall. Later we shall consider the stratigraphic data that link the Cimonian Wall with the substructure. Hence, the substructure and the Cimonian Wall were erected in the same period of time (after 468 B.C.).
The Cimonian Wall replaced the earlier southern wall of the Acropolis known as the Pelasgic Wall. In the area of the Parthenon the remains of the Pelasgic Wall have been traced as running an irregular course between the substructure and the Cimonian Wall; the SW corner of the Parthenon overlaps in part the Pelasgic Wall. The Pelasgic Wall shows signs of violent destruction; according to Walther Kolbe it was thrown down by the Persians; but it is possible that it was thrown down after the oligarchs led by Isagoras tried to use the Acropolis as a fortress against the supporters of the newly-established Athenian democracy (508 B.C.). The remains of the Pelasgic Wall helped in braking the weight of the earth filling pressing against the Cimonian Wall.
Many of these facts became known to Ross when he dug a trench from the middle of the substructure to the Cimonian Wall, but he did not want to draw the conclusion that the Cimonian Wall and the substructure belong together. Instead of claiming that he had discovered a proto-Parthenon of the Cimonian age (to which I shall refer as Parthenon I), he wanted to be remembered as the one who had discovered the temple destroyed by the Persians, a temple of greater historical significance. For this reason he accepted as a proven fact the tentative suggestion of Leake that the column drums of the north wall used to belong to the Old Temple, although Ross himself had noticed some of the fallacies in Leake's sequence of arguments. In order to prove that he had discovered the Old Temple, Ross needed to link the column drums of the north wall with the substructure; he satisfied this need by announcing triumphantly that in the earth filling to the east of the southeast corner of the substructure (just to the west of the present Acropolis Museum) he had found a number of column drums, 12 to 15, similar to those incorporated into the north wall. He could prove that the drums of the north wall and these drums belong together; but he had yet to prove that all these drums belong to the Old Temple.
Penrose, who was most sympathetic to Ross's views, tried to give careful consideration to the matter of the drums. Ten years after Ross's digging, he examined what could still be seen of the drums unearthed to the east of the substructure and also traced in the library of the Institute of British Architects a letter written by an eyewitness to their excavation, a Mr. Bracebridge. On the basis of these sources of information Penrose formulated the following conclusion:
One of the most remarkable parts of this excavation occurs immediately to the east of the Parthenon where remain a number of drums of columns, formed of Pentelic marble, in a more or less perfect state, some much shattered, others apparently rough from the quarry, others partly worked, and discarded in consequence of some defect on the material. The ground about them was strewed with marble chips, and some sculptors' tools, and jars containing red color were found with them.
Penrose drew the necessary inference that the column drums found to the east of the substructure have nothing to do with Persian destruction but are remnants of the workshop of the Periclean Parthenon. Next, Penrose was bound to ask whether the drums of the north wall too were merely rejected frusta of the Periclean temple, like those just mentioned.
In order to answer this question, he proceeded to measure very carefully the drums of the north wall. He found that 13 of them have a diameter of 1899.8 mm and 5 a diameter of 1707.2 mm, which would indicate that the former were intended to be part of the peristyle of the Periclean Parthenon and the latter of its Western Porch. But Penrose felt that before accepting this conclusion he should proceed to a further measurement. He tested the curvature of the flutes of some of the drums of the north wall with a kymagraph and found it to be slightly different from the columns of the Periclean Parthenon. Hence, he concluded that the drums of the north wall could never have been intended for the Parthenon.
Penrose was factually incorrect. In his pursuit of exact measurements in archaeological documentation, he pushed the method to an irrational extreme. It is a basic rule of metrology that by measuring to a point of precision that is beyond what is warranted by the circumstances, one is led to assume discrepancies that do not occur. In the specific case before us, probably the drums of the north wall were too worn out for a precise test of the curvature of the flutes, whereas Penrose had a paramount concern in convincing archaeologists of the usefulness of instruments such as the kymagraph. The solution of problems by the use of the graphic method was his hobby horse not only in archaeology, but also in his astronomical research.
Ross, without employing the misleading subtlety of Penrose, had concluded that the drums of the north wall have the same dimensions as those of the Periclean Parthenon. In his excavation he must have noticed the details mentioned by Penrose that indicate that the drums found near the substructure were remnants from a workshop. He himself reports that some of these drums were unfinished pieces, a fact which excludes the possibility of their having been used as part of a standing temple destroyed by the Persians. Many of the drums of the north wall are unfinished pieces too.
It is true that all the drums in question show fissures, but Ross never explained why these fissures should be considered evidence of fire damage: in truth, fissures were the very reason why the drums were discarded during the construction of the Periclean Parthenon. Later I will report how Arnold Tschira proved that the fissures in the drums cannot be the result of fire damage.
As to the alleged calcification reported by Ross, one cannot either affirm or deny that it exists, because he never explained what he meant by calcification. There are no features in the drums that can be properly described as a process of calcification. Tschira has suggested that possibly Ross was referring to the areas where the Pentelic marble has acquired a granular appearance, because of the formation, under the effect of the elements, of crystals larger than the usual ones. In any case, Ross did not try to explain why the supposed calcification should be taken as evidence of fire damage.
It is important to notice that Penrose, who tried to measure with care the columns of the north wall, never mentioned the occurrence of traces of fire in them, although he noticed such traces in the fragments of the entablature embedded in the more western section of the same wall; the entablature used to be part of the temple destroyed by the Persians in 480 B.C., the Old Temple identified in Kavvadias's excavations.
Penrose had come to the right conclusion that the drums found near the substructure were pieces rejected during the construction of Parthenon III. He failed to reach the same conclusion for the drums of the north wall; but this conclusion was arrived at by Emile Burnouf in 1877.
Ross had soundly proved that the substructure of the Periclean Parthenon had been originally planned for a different temple. But, in order to prove that this different temple was the Old Temple, he constructed two figments: one, based on a supposition of Leake, that the part of the north wall that incorporates the column drums was erected by Themistocles immediately after the withdrawal of the Persians; the other that these column drums and those found near the substructure show traces of fire. The paradoxical result has been that these two figments were accepted as truth and are still strenuously supported by the majority of scholars, whereas Ross did not meet much success in convincing other scholars of his main contention, which is fully supported by factual evidence. In 1891 Penrose was the only scholar who agreed with Ross that under the platform of the Periclean Parthenon there are remains of an earlier temple | http://www.metrum.org/key/athens/ludwig.htm | 13
18 | Almost every undergraduate introductory economics course begins the same way: with the definition of economics. Economics is the study of how people use scarce resources to satisfy unlimited wants. At the core of economics is the idea that our world is a place plagued with scarcity; that is, we do not have all the resources we want. As a result, we must make choices. When we make a choice, that choice necessarily means that we have to give up something. The something we give up is called opportunity cost. Economists define opportunity cost as the next best alternative or the highest valued alternative to the choice that was made. If we choose to produce a good using a resource, the opportunity cost of producing that good is the highest valued alternative use of that resource.
Economics: The study of how individuals and society make decisions about how to use scarce resources to satisfy unlimited material wants.
Scarcity: The condition that exists when there are not enough resources to satisfy all the wants of individuals or society.
Choices: The decisions individuals and society make about the use of scarce resources.
Opportunity Costs: The next highest valued alternative that is given up when a choice is made.
Why is it important to teach students about opportunity cost, scarcity, and choice in the K-12 classroom? These concepts can be thought of as the core of capable decision-making. If we teach our students, beginning at an early age, the critical thinking skills to analyze problems and make informed decisions about their use of time and money, they are likely to be better students, save more over their lifetimes, and choose life paths that result in higher standards of living. As schools look to teach their students more about personal finance topics such as budgeting and saving, equipping students with a strong understanding of opportunity cost, scarcity, and choice is essential.
Teaching these concepts from an early age as a progression from kindergarten through senior year in high school is important. In the lowest grades, students can identify two alternatives, the choice they would make between them, and the opportunity cost of their decision. In upper elementary school, students can use the PACED decision-making model to decide between more than two alternatives. In the PACED model, students learn to identify the problem (P) or decision they have to make, list the alternatives (A) available to them, identify a set of criteria (C) they can use to evaluate the different alternatives, evaluate (E) the alternatives based on the criteria, and make a decision (D) between the alternatives. The students are then asked to identify the opportunity cost, the next best alternative, of the choice they made.
By middle and high school, students should be able to identify more complex opportunity cost problems and make use of a production possibilities curve to show how production in a two-good economy is allocated. Discussions of opportunity cost in the high school classroom can be used to address pressing current events. For example, you might ask your students to assess this situation: driving five miles to a gas station that sells gasoline for 5 cents cheaper or going to the gas station around the corner. They can discuss how to identify the opportunity cost associated with buying the cheaper gas. When looking at environmental issues, you can ask your students to research municipal recycling programs and identify the opportunity cost associated with a town’s adopting a mandatory recycling program.
The connections with personal finance issues are some of the most important contexts in which students can use opportunity cost. Teaching middle and high school students to budget and make realistic spending decisions is important. Doing so also lends itself well to discussions of opportunity cost and choice. Most household budgets require individuals and the household to make tradeoffs between different things on which to spend household income. With sound decision-making skills that are well grounded in the concept of opportunity cost, our young people can be expected to make more thoughtful budget decisions as they go off to college and the world of work.
Andrew T. Hill, Ph.D.
Michael Munger, chair of political science at Duke University, in his online article “A Fable of the OC,” published at the Library of Economics and Liberty, provides some fascinating insights into opportunity cost.
Munger, Michael. “A Fable of the OC,” Library of Economics and Liberty (2006). www.econlib.org/library/Columns/y2006/Mungeropportunitycost.html
Russian economic educator Liudmila Guinkel has written an innovative high school lesson entitled “Scarcity and Choice.” Written in English and developed as part of the National Council on Economic Education’s Training of Writers program, Guinkel’s lesson provides an active-learning format for teaching about scarcity, opportunity cost, choice, and the production possibilities frontier. The lesson can be downloaded at www.ncee.net/ei/lessons/OldMac/lesson5/ .
The National Council on Economic Education offers a free sample lesson from its publication Focus: Economics, Grades 3-5. The “Back-to-School Scarcity” lesson for elementary school uses active-learning methodologies to engage students in using a decision-making grid to choose between alternatives. The lesson can be found online at www.ncee.net/resources/lessons/download.php?durl=focus35_lesson2.pdf .
The PACED decision-making model uses a five-step process and a goal to guide the decision-making process. In primary grades, students can use + and - or 0 and 1 in the grid to indicate whether an alternative meets each of the criteria. The answers are then summed for each alternative and a decision is made. In later grades, a numbered scale (e.g., 0 to 3) could be used to indicate how well each alternative meets the criteria. After summing the results, the students indicate the choice they would make and then identify the opportunity cost of their decision.
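The grid computation just described can be sketched in a few lines of code. The following C++ fragment is purely illustrative and is not part of the article; the alternatives, criteria, and scores are invented, and the 0-3 scale follows the description above: score each alternative against each criterion, sum the rows, and take the highest total as the decision.

#include <iostream>
#include <string>

int main() {
    const std::string alternatives[3] = {"Bicycle", "Video game", "Savings"};
    const std::string criteria[3]     = {"Fun", "Useful", "Lasts long"};

    // scores[alternative][criterion], each 0 (poor) to 3 (excellent); values are invented
    const int scores[3][3] = {
        {3, 3, 2},   // Bicycle
        {3, 1, 1},   // Video game
        {1, 3, 3}    // Savings
    };

    std::cout << "Criteria: ";
    for (int c = 0; c < 3; ++c) std::cout << criteria[c] << "  ";
    std::cout << '\n';

    int best = 0, bestTotal = -1;
    for (int a = 0; a < 3; ++a) {
        int total = 0;
        for (int c = 0; c < 3; ++c)
            total += scores[a][c];                 // sum the row for this alternative
        std::cout << alternatives[a] << ": " << total << '\n';
        if (total > bestTotal) { bestTotal = total; best = a; }
    }
    std::cout << "Decision: " << alternatives[best] << '\n';
    return 0;
}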
Steps in the PACED model:
1. State the Problem, the decision that has to be made.
2. List the Alternatives available.
3. Identify the Criteria to use in judging the alternatives.
4. Evaluate each alternative against the criteria.
5. Make a Decision, and identify the opportunity cost of that choice.
Students will understand that productive resources are limited. Therefore, people cannot have all the goods and services they want. As a result, they must choose some things and give up others. | http://www.philadelphiafed.org/education/teachers/publications/intersections/2006/Spring/opportunity-cost.cfm | 13 |
42 | A syllogism is composed of two statements, the premises, from which a third one, the conclusion, is inferred. Categorical syllogisms are syllogisms made up of three categorical propositions. They are a type of deductive argument, that is, the conclusion (provided the argument form is valid) follows with necessity from the premises. Here are two examples:
(1) All Greeks are mortal.
    All Athenians are Greeks.
    Therefore all Athenians are mortal.

(2) All mammals are animals.
    All humans are mammals.
    Therefore all humans are animals.
Such arguments were formulated by ancient Greek logicians and have been used by logicians ever since. Hence the trite examples. Both of these categorical syllogisms have the same form. Each one has two premises and a conclusion. The first premise in a standard form categorical syllogism is the major premise; the second is the minor premise. The two premises share a common term, called the middle term. In the first example, the middle term is "Greeks"; in the second, "mammals". Since each one has the middle term in common, we cannot distinguish between the premises by means of the middle term. What indicates that the first premise is the major premise is the presence of the predicate term of the conclusion: "mortal" in the first example; "animals" in the second. Similarly, the minor premise contains the subject term of the conclusion--"Athenians" and "humans" respectively. The form of these two syllogisms--and of every other Figure 1 (figure will be explained below) standard form categorical syllogism--can be easily displayed:
Major Premise:   Middle Term    Predicate Term
Minor Premise:   Subject Term   Middle Term
Conclusion:      Subject Term   Predicate Term
Moreover, each of the three propositions in each example is an A proposition: All S are P. Thus we can display the form again, calling attention not only to the position of the terms, but also to the kind of propositions used:
Major Premise:  All M are P.
Minor Premise:  All S are M.
Conclusion:     All S are P.
NOTE: Exercise 2 provides you an opportunity to analyze categorical syllogisms.
Every standard form categorical syllogism will have three terms, with each one used twice in the three propositions which make up the syllogism. The predicate term will be used in the major premise and the conclusion, the subject term in the minor premise and conclusion and the middle term in the two premises. The arrangement of the four propositions--A, E, I or O--determines the mood, or ordering of the three propositions which make up the syllogism. A syllogism with all A propositions, such as those above, is one in mood AAA. One with E propositions as the major premise and conclusion and an I proposition as the minor premise would be in mood EIE. Thus the order of propositions determines the mood of a categorical syllogism. Since there are four kinds of categorical propositions and three propositions in each syllogism, there are 64 possible syllogistic moods. Moreover, there are 16 possible arrangements of the four kinds of propositions with each A, E, I or O proposition serving as the major premise:
AAA EAA IAA OAA
AAE EAE IAE OAE
AAI EAI IAI OAI
AAO EAO IAO OAO
AEA EEA IEA OEA
AEE EEE IEE OEE
AEI EEI IEI OEI
AEO EEO IEO OEO
AIA EIA IIA OIA
AIE EIE IIE OIE
AII EII III OII
AIO EIO IIO OIO
AOA EOA IOA OOA
AOE EOE IOE OOE
AOI EOI IOI OOI
AOO EOO IOO OOO
These 64 moods can be arranged in four figures, with the figure being determined by the position of the middle term. Since the middle term cannot occur in the conclusion, there are only four possible arrangements of the terms: the middle term can be the subject or predicate of the major premise or the subject or predicate of the minor premise. The usual arrangement of these four figures is this:
      M  P        P  M        M  P        P  M
 (1)  S  M   (2)  S  M   (3)  M  S   (4)  M  S
      ----        ----        ----        ----
      S  P        S  P        S  P        S  P
Since there are 64 moods and four figures, there are 256 possible categorical syllogisms. Each of these 256 syllogisms is distinguished from the others by a distinct mood and figure. Examples (1) and (2) above are AAA-1 categorical syllogisms. Their mood is AAA and their figure is the first one.
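As a rough illustration of this arithmetic (the code is an editorial sketch, not part of the original notes), the 256 forms can be generated mechanically by running through the four proposition letters for the major premise, the minor premise, and the conclusion, and the four figures:

#include <iostream>

int main() {
    const char letters[4] = {'A', 'E', 'I', 'O'};
    int total = 0;

    // mood is written major-minor-conclusion, followed by the figure number
    for (int major = 0; major < 4; ++major)
        for (int minor = 0; minor < 4; ++minor)
            for (int concl = 0; concl < 4; ++concl)
                for (int figure = 1; figure <= 4; ++figure) {
                    std::cout << letters[major] << letters[minor]
                              << letters[concl] << '-' << figure << '\n';
                    ++total;
                }

    std::cout << total << " distinct forms\n";   // prints 256
    return 0;
}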
NOTE: Exercise 3 provides practice in identifying the mood and figure of syllogisms and constructing syllogisms.
Copyright © 1999, Michael Eldridge | http://www.philosophy.uncc.edu/mleldrid/Logic/l02.html | 13 |
23 | Iterators are pointer-like objects, used to cycle through the elements stored in a container.
A range is a sequence of values held in a container. The range is described by a pair of iterators, which define the beginning and end of the sequence.
When iterators are used to describe a range of values in a container, it is assumed (but not verified) that the second iterator is reachable from the first. Errors will occur if this is not true.
Because ordinary pointers have the same functionality as random access iterators, most of the generic algorithms in the standard library can be used with conventional C++ arrays, as well as with the containers provided by the standard library.
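A minimal sketch of this point, using nothing beyond the standard headers: a built-in array is handed directly to generic algorithms, with plain pointers serving as the begin and end iterators.

#include <algorithm>
#include <iostream>

int main() {
    int data[6] = {7, 2, 9, 4, 1, 6};
    int* begin = data;
    int* end   = data + 6;              // one past the last element

    std::sort(begin, end);              // the same call works for vector iterators

    int* where = std::find(begin, end, 9);
    if (where != end)
        std::cout << "found 9 at index " << (where - begin) << '\n';
    return 0;
}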
A number of the generic algorithms manipulate two parallel sequences. Frequently the second sequence is described using only a beginning iterator, rather than an iterator pair. It is assumed, but not checked, that the second sequence has at least as many elements as the first.
The function randomInteger described here is used in a number of the example programs presented in later sections.
An input stream iterator permits an input stream to be read using iterator operations. An output stream iterator similarly writes to an output stream when iterator operations are executed.
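A small sketch (not one of the tutorial's own example files, and assuming a compiler that accepts container construction from an iterator pair) showing both kinds of stream iterators: the program reads whitespace-separated integers from standard input into a vector and then echoes them to standard output.

#include <algorithm>
#include <iostream>
#include <iterator>
#include <vector>

int main() {
    std::istream_iterator<int> in(std::cin), eof;   // default-constructed value marks end of stream
    std::vector<int> values(in, eof);               // read everything available

    std::ostream_iterator<int> out(std::cout, " ");
    std::copy(values.begin(), values.end(), out);   // writing through the iterator prints each value
    std::cout << '\n';
    return 0;
}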
The class definitions for unary_function and binary_function can be incorporated by #including functional.
A more complex illustration of the use of a function object occurs in the radix sorting example program given as an illustration of the use of the list data type in Section 6.3. In this program references are initialized in the function object, so that during the sequence of invocations the function object can access and modify local values in the calling program.
The idea described here by the term binder is in other contexts often described by the term curry. This is not, as some people think, because it is a hot idea. Instead, it is named after the computer scientist Haskell B. Curry, who used the concept extensively in an influential book on the theory of computation in the 1930's. Curry himself attributed the idea to Moses Schönfinkel, leaving one to wonder why we don't instead refer to binders as "Schönfinkels."
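A sketch of the binder idea using bind2nd, the classic facility this tutorial describes (bind2nd was deprecated in C++11 and removed in C++17, where a lambda would be used instead): it fixes the second argument of a binary function object, yielding a unary predicate.

#include <algorithm>
#include <functional>
#include <iostream>
#include <vector>

int main() {
    std::vector<int> v;
    for (int i = 0; i < 10; ++i) v.push_back(i);

    // bind2nd(greater<int>(), 6) is a one-argument predicate meaning "x > 6"
    long n = std::count_if(v.begin(), v.end(),
                           std::bind2nd(std::greater<int>(), 6));
    std::cout << n << " values are greater than 6\n";
    return 0;
}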
Elements that are held by a vector must define a default constructor (constructor with no arguments), as well as a copy constructor. Although not used by functions in the vector class, some of the generic algorithms also require vector elements to recognize either the equivalence operator (operator ==) or the relational less-than operator (operator <).
Because it requires the ability to define a method with a template argument different from the class template, some compilers may not yet support the initialization of containers using iterators. In the mean time, while compiler technology catches up with the standard library definition, the Rogue Wave version of the standard library will support conventional pointers and vector iterators in this manner.
A vector stores values in a single large block of memory. A deque, on the other hand, employs a number of smaller blocks. This difference may be important on machines that limit the size of any single block of memory, because in such cases a deque will be able to hold much larger collections than are possible with a vector.
Even adding a single element to a vector can, in the worst case, require time proportional to the number of elements in the vector, as each element is moved to a new location. If insertions are a prominent feature of your current problem, then you should explore the possibility of using containers, such as lists or sets, which are optimized for insert operations.
Once more, it is important to remember that should reallocation occur as a result of an insertion, all references, pointers, and iterators that denoted a location in the now-deleted memory block that held the values before reallocation become invalid.
Note that count() returns its result through an argument that is passed by reference. It is important that this value be properly initialized before invoking this function.
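A minimal stand-in for the interface described in this note (the name count_into is hypothetical; the library version discussed here simply calls it count): because the total is accumulated into a reference parameter, the caller must initialize it before the call.

#include <iostream>
#include <vector>

template <class InputIterator, class T, class Size>
void count_into(InputIterator first, InputIterator last,
                const T& value, Size& n) {
    for (; first != last; ++first)
        if (*first == value)
            ++n;                         // adds to whatever was already in n
}

int main() {
    std::vector<int> v;
    for (int i = 0; i < 12; ++i) v.push_back(i % 3);

    long zeros = 0;                      // forgetting this initialization gives garbage
    count_into(v.begin(), v.end(), 0, zeros);
    std::cout << zeros << " zeros\n";
    return 0;
}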
Source for this program is found in the file sieve.cpp.
Note that if you declare a container as holding pointers, you are responsible for managing the memory for the objects pointed to. The container classes will not, for example, automatically free memory for these objects when an item is erased from the container.
Unlike a vector or deque, insertions or removals from the middle of a list will not invalidate references or pointers to other elements in the container. This property can be important if two or more iterators are being used to refer to the same container.
The searching algorithms in the standard library will always return the end of range iterator if no element matching the search condition is found. Unless the result is guaranteed to be valid, it is a good idea to check for the end of range condition.
The executable version of the widget works program is contained in file widwork.cpp on the distribution disk.
The complete radix sort program is found in the file radix.cpp in the tutorial distribution disk.
Although the abstract concept of a set does not necessarily imply an ordered collection, the set data type is always ordered. If necessary, a collection of values that cannot be ordered can be maintained in, for example, a list.
In other programming languages, a multiset is sometimes referred to as a bag.
As we noted in the earlier discussion on vectors and lists, the initialization of containers using a pair of iterators requires a mechanism that is still not widely supported by compilers. If not provided, the equivalent effect can be produced by declaring an empty set and then using the copy() generic algorithm to copy values into the set.
If you want to use the pair data type without using maps, you should include the header file named utility.
Unlike a vector or deque, the insertion or removal of values from a set does not invalidate iterators or references to other elements in the collection.
This program can be found in the file spell.cpp in the tutorial distribution.
In other programming languages, a map-like data structure is sometimes referred to as a dictionary, a table, or an associative array.
See the discussion of insertion in Section 8 for a description of the pair data type.
Unlike a vector or deque, the insertion or removal of elements from a map does not invalidate iterators which may be referencing other portions of the container.
The complete example program is included in the file tutorial tele.cpp in the distribution.
The executable version of this program is found in the file graph.cpp on the tutorial distribution disk.
An executable version of the concordance program is found on the tutorial distribution file under the name concord.cpp.
A stack is sometimes referred to as a LIFO structure, and a queue is called a FIFO structure. The abbreviation LIFO stands for Last In, First Out. This means the first entry removed from a stack is the last entry that was inserted. The term FIFO, on the other hand, is short for First In, First Out. This means the first element removed from a queue is the first element that was inserted into the queue.
Note that on most compilers it is important to leave a space between the two right angle brackets in the declaration of a stack; otherwise they are interpreted by the compiler as a right shift operator.
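A tiny sketch of the declaration in question; under pre-C++11 parsing rules the adjacent '>' characters must be separated by a space, or the compiler reads them as the right shift operator.

#include <iostream>
#include <stack>
#include <vector>

int main() {
    std::stack<int, std::vector<int> > s;   // note the space between the two '>' characters
    s.push(3);
    s.push(7);
    std::cout << s.top() << '\n';           // prints 7, the last value pushed (LIFO)
    s.pop();
    return 0;
}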
This program is found in the file calc.cpp in the distribution package.
A more robust program would check to see if the stack was empty before attempting to perform the pop() operation.
The complete version of the bank teller simulation program is found in file teller.cpp on the distribution disk.
The term priority queue is a misnomer, in that the data structure is not a queue, in the sense that we used the term in Section 10, since it does not return elements in a strict first-in, first-out sequence. Nevertheless, the name is now firmly associated with this particular data type.
As we noted in earlier sections, support for initializing containers using a pair of iterators requires a feature that is not yet widely supported by compilers. While we document this form of constructor, it may not yet be available on your system.
Information on Heaps. Details of the algorithms used in manipulating heaps will not be discussed here, however such information is readily available in almost any textbook on data structures.
We describe the priority queue as a structure for quickly discovering the largest element in a sequence. If, instead, your problem requires the discovery of the smallest element, there are various possibilities. One is to supply the inverse operator as either a template argument or the optional comparison function argument to the constructor. If you are defining the comparison argument as a function, as in the example problem, another solution is to simply invert the comparison test.
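A short sketch of the first possibility mentioned above: supplying greater<int> as the comparison template argument turns the structure into a min-priority queue, so top() yields the smallest element.

#include <functional>
#include <iostream>
#include <queue>
#include <vector>

int main() {
    std::priority_queue<int, std::vector<int>, std::greater<int> > minq;
    minq.push(42);
    minq.push(7);
    minq.push(19);

    std::cout << minq.top() << '\n';   // prints 7, the smallest value held
    return 0;
}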
Other example programs in this tutorial have all used containers to store values. In this example the container will maintain pointers to values, not the values them-selves. Note that a consequence of this is that the programmer is then responsible for managing the memory for the objects being manipulated.
The complete event simulation is found in the file icecream.cpp on the distribution disk.
In the remainder of this section we will refer to the string data type, however all the operations we will introduce are equally applicable to wide strings.
Remember, the ability to initialize a container using a pair of iterators requires the ability to declare a template member function using template arguments independent of those used to declare the container. At present not all compilers support this feature.
Note that the contents of an iterator are not guaranteed to be valid after any operation that might force a reallocation of the internal string buffer, such as an append or an insertion.
Although the function is accessible, users will seldom invoke the member function compare() directly. Instead, comparisons of strings are usually performed using the conventional comparison operators, which in turn make use of the function compare().
The split function can be found in the concordance program in file concord.cpp.
The sample programs described in this section can be found in the file alg1.cpp.
The initialization algorithms all overwrite every element in a container. The difference between the algorithms is the source for the values used in initialization. The fill() algorithm repeats a single value, the copy() algorithm reads values from a second container, and the generate() algorithm invokes a function for each new value.
The result returned by the copy() function is a pointer to the end of the copied sequence. To make a catenation of values, the result of one copy() operation can be used as a starting iterator in a subsequent copy().
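A small sketch of this idiom: the iterator returned by the first copy() marks where the next copy should begin, so two sequences are catenated into one destination.

#include <algorithm>
#include <iostream>
#include <vector>

int main() {
    int a[3] = {1, 2, 3};
    int b[3] = {4, 5, 6};

    std::vector<int> dest(6);
    std::vector<int>::iterator next = std::copy(a, a + 3, dest.begin());  // one past last copied
    std::copy(b, b + 3, next);                                            // dest now holds 1 2 3 4 5 6

    for (std::size_t i = 0; i < dest.size(); ++i)
        std::cout << dest[i] << ' ';
    std::cout << '\n';
    return 0;
}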
In the copy_backward algorithm, note that it is the order of transfer, and not the elements themselves, that is "backwards"; the relative placement of moved values in the target is the same as in the source.
A number of algorithms operate on two parallel sequences. In most cases the second sequence is identified using only a starting iterator, not a starting and ending iterator pair. It is assumed, but never verified, that the second sequence is at least as large as the first. Errors will occur if this condition is not satisfied.
The example functions described in this section can be found in the file alg2.cpp.
The searching algorithms in the standard library all return the end-of-sequence iterator if no value is found that matches the search condition. As it is generally illegal to dereference the end-of-sequence value, it is important to check for this condition before proceeding to use the result of a search.
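A minimal sketch of the check described here: the result of find() is compared against the end-of-range iterator before it is dereferenced.

#include <algorithm>
#include <iostream>
#include <vector>

int main() {
    std::vector<int> v;
    for (int i = 1; i <= 5; ++i) v.push_back(i * i);   // 1 4 9 16 25

    std::vector<int>::iterator where = std::find(v.begin(), v.end(), 16);
    if (where != v.end())
        std::cout << "found " << *where << '\n';
    else
        std::cout << "not present\n";                   // never dereference the end iterator
    return 0;
}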
These algorithms perform a linear sequential search through the associated structures. The set and map data structures, which are ordered, provide their own find() member functions, which are more efficient. Because of this, the generic find() algorithm should not be used with set and map.
The basic_string class provides its own versions of the find_first_of and find_end algorithms, including several convenience overloads of the basic pattern indicated here.
In the worst case, the number of comparisons performed by the algorithm search() is the product of the number of elements in the two sequences. Except in rare cases, however, this worst case behavior is highly unlikely.
As with search, in the worst case the number of comparisons performed by the algorithm find_end() is the product of the number of elements in the two sequences.
The maximum and minimum algorithms can be used with all the data types provided by the standard library. However, for the ordered data types, set and map, the maximum or minimum values are more easily accessed as the first or last elements in the structure.
The example functions described in this section can be found in the file alg3.cpp.
While there is a unique stable_partition() for any sequence, the partition() algorithm can return any number of values. The following, for example, are all legal partitions of the example problem.
2 4 6 8 10 1 3 5 7 9
10 8 6 4 2 5 7 9 3 1
2 6 4 8 10 3 5 7 9 1
6 4 2 10 8 5 3 7 9 1.
Permutations can be ordered, with the smallest permutation being the one in which values are listed smallest to largest, and the largest being the sequence that lists values largest to smallest. Consider, for example, the permutations of the integers 1 2 3. The six permutations of these values are, in order:
1 2 3
1 3 2
2 1 3
2 3 1
3 1 2
3 2 1
Notice that in the first permutation the values are all ascending, while in the last permutation they are all descending.
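A short sketch (an editorial addition, not one of the tutorial sources) that generates exactly the ordering shown above: next_permutation advances from the all-ascending arrangement and returns false once the all-descending one has been produced.

#include <algorithm>
#include <iostream>

int main() {
    int v[3] = {1, 2, 3};                                // the smallest permutation
    do {
        std::cout << v[0] << ' ' << v[1] << ' ' << v[2] << '\n';
    } while (std::next_permutation(v, v + 3));           // false once 3 2 1 has been printed
    return 0;
}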
The algorithms in this section set up a sequence so that the desired elements are moved to the front. The remaining values are not actually removed, but the starting location for these values is returned, making it possible to remove these values by means of a subsequent call on erase(). Remember, the remove algorithms do not actually remove the unwanted elements.
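A minimal sketch of the two-step idiom this note describes: remove() moves the values to be kept to the front and returns the new logical end, and a subsequent erase() actually shrinks the container.

#include <algorithm>
#include <iostream>
#include <vector>

int main() {
    std::vector<int> v;
    for (int i = 0; i < 10; ++i) v.push_back(i % 3);    // 0 1 2 0 1 2 0 1 2 0

    std::vector<int>::iterator new_end = std::remove(v.begin(), v.end(), 0);
    v.erase(new_end, v.end());                           // now the unwanted tail is really gone

    for (std::size_t i = 0; i < v.size(); ++i)
        std::cout << v[i] << ' ';                        // 1 2 1 2 1 2
    std::cout << '\n';
    return 0;
}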
The example functions described in this section can be found in the file alg4.cpp.
The example functions described in this section can be found in the file alg5.cpp.
Note that if your compiler does not support partial specialization then you will not have the versions of the count() algorithms that return the sum as a function result, but instead only the versions that add to the last argument in their parameter list, which is passed by reference. This means successive calls on these functions can be used to produce a cumulative sum. This also means that you must initialize the variable passed to this last argument location prior to calling one of these algorithms.
By substituting another function for the binary predicate, the equal and mismatch algorithms can be put to a variety of different uses. Use the equal() algorithm if you want a pairwise test that returns a boolean result. Use the mismatch() algorithm if you want to discover the location of elements that fail the test.
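A small sketch of the distinction: equal() gives a yes/no answer for the pairwise test, while mismatch() returns a pair of iterators locating the first failure.

#include <algorithm>
#include <iostream>
#include <utility>

int main() {
    int a[5] = {1, 2, 3, 4, 5};
    int b[5] = {1, 2, 9, 4, 5};

    bool same = std::equal(a, a + 5, b);                       // false: the sequences differ
    std::pair<int*, int*> where = std::mismatch(a, a + 5, b);  // points at the 3 and the 9

    std::cout << std::boolalpha << same << ' '
              << *where.first << ' ' << *where.second << '\n';
    return 0;
}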
The example functions described in this section can be found in the file alg6.cpp.
The function passed as the third argument is not permitted to make any modifications to the sequence, so it can only achieve any result by means of a side effect, such as printing, assigning a value to a global or static variable, or invoking another function that produces a side effect. If the argument function returns any result, it is ignored.
The example programs described in this section have been combined and are included in the file alg7.cpp in the tutorial distribution. As we did in Section 13, we will generally omit output statements from the descriptions of the programs provided here, although they are included in the executable versions.
Yet another sorting algorithm is provided by the heap operations, to be described in Section 14.8.
Note that an ordered collection is a heap, but a heap need not be an ordered collection. In fact, a heap can be constructed in a sequence much more quickly than the sequence can be sorted.
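A brief sketch of the heap operations on an arbitrary vector; building the heap is a linear-time step, cheaper than a full sort:

    #include <algorithm>
    #include <iostream>
    #include <vector>

    int main() {
        std::vector<int> v = {9, 3, 7, 1, 8, 5};

        std::make_heap(v.begin(), v.end());   // heap order: largest value at the front

        v.push_back(12);
        std::push_heap(v.begin(), v.end());   // restore the heap after an insertion

        std::pop_heap(v.begin(), v.end());    // move the largest value to the back
        int largest = v.back();
        v.pop_back();

        std::sort_heap(v.begin(), v.end());   // turn the remaining heap into a sorted range

        std::cout << largest << '\n';         // prints: 12
        return 0;
    }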
This program can be found in the file exceptn.cpp in your code distribution.
You can find this program in the file autoptr.cpp in the tutorial distribution.
Note that, with the exception of the member functions real() and imag(), most operations on complex numbers are performed using ordinary functions, not member functions.
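A brief sketch (the values are arbitrary) showing the nonmember operations alongside the two member functions:

    #include <complex>
    #include <iostream>

    int main() {
        std::complex<double> a(3.0, 4.0);     // 3 + 4i
        std::complex<double> b(1.0, -2.0);

        // Ordinary (nonmember) functions and operators do most of the work...
        double magnitude          = std::abs(a);    // 5
        std::complex<double> sum  = a + b;          // 4 + 2i
        std::complex<double> conj = std::conj(a);   // 3 - 4i

        // ...while real() and imag() are available as member functions.
        std::cout << sum.real() << ' ' << sum.imag() << ' '
                  << magnitude  << ' ' << conj.imag() << '\n';  // prints: 4 2 5 -4
        return 0;
    }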
This program is found in the file complx.cpp in the distribution.
For reasons of compatibility, the numeric_limits mechanism supplements, rather than strictly replaces, the symbolic constants used in older C++ libraries. Thus both mechanisms will, for the present, exist in parallel. However, because the numeric_limits technique is more uniform and extensible, the older symbolic constants can be expected to fall out of use over time.
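A short illustration of the two mechanisms side by side:

    #include <cfloat>
    #include <climits>
    #include <iostream>
    #include <limits>

    int main() {
        // Older symbolic constants and their numeric_limits equivalents coexist.
        std::cout << INT_MAX     << ' ' << std::numeric_limits<int>::max()        << '\n';
        std::cout << DBL_EPSILON << ' ' << std::numeric_limits<double>::epsilon() << '\n';

        // numeric_limits is uniform and extensible: the same syntax works for any
        // numeric type, including user-defined types that specialize the template.
        std::cout << std::numeric_limits<unsigned short>::max() << '\n';
        return 0;
    }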
©Copyright 1996, Rogue Wave Software, Inc. | http://www.math.hkbu.edu.hk/parallel/pgi/doc/pgC++_lib/stdlibug/sidebar1.htm | 13 |
18 | We have seen what inductive arguments are. Inductive arguments are those whose conclusions cannot be conclusive because they assert more than what is contained in the premises. Here we are going to discuss two kinds of inductive arguments -- arguments by analogy and induction by enumeration.
Argument by Analogy
Consider the following argument:
Somsri and Somporn share a number of characteristics -- (1) they are from the same high school; (2) they love studying mathematics; (3) they got about the same GPA at high school. We know that Somsri has a good GPA in the university, so Somporn will have a good GPA at her university too.
How confident are we that Somporn will have a good GPA, as Somsri does? Suppose we do not know Somporn's GPA yet. But since we know something about her, and especially about her similarities with Somsri, we believe we have some grounds for drawing that conclusion about Somporn too. But since the information about Somporn's GPA is not already included in the premises, we cannot be one hundred percent certain. At most the conclusion is very likely, but we cannot be completely sure.
This argument is an example of arguments from analogy. It can be stated in the following formula:
A and B share the following characteristics -- a, b, c, d
A has the special characteristic e.
Therefore, B has characteristic e too.
We conclude that B has e from the facts that A and B both have a, b, c, and d. You can see that this conclusion can never be validly inferred, because we can never be completely certain whether B really has e or not, since it is not stated in the premises. (Premises are what we know to be true, and the conclusion is what we want to infer or draw out from them.) Thus arguments by analogy are clearly inductive.
Although we cannot be completely certain whether B has e or not, we have some means to decide how much confidence we want to put in the conclusion. So the question is: What are the conditions that will make us more confident that Somporn will have a good GPA too? Firstly and most importantly, there must be some logical connection between the characteristics stated in the premises and the concluded characteristic. If there is no connection whatsoever between Somporn's and Somsri's graduating from the same high school, loving mathematics, and having the same GPA in high school, on the one hand, and their having the same GPA in university, on the other, then we cannot put much confidence in the conclusion. On the other hand, if there is some connection -- that is, if we know that a large percentage of those who love mathematics usually have good GPAs -- then we will be more confident that the conclusion is true.
But how do we know that those who love mathematics are more likely to earn high GPAs? From experience, of course. This is an important point in evaluating inductive arguments. Since we cannot rely only on what is stated in the premises in evaluating such arguments, we have to rely on our own general knowledge. There are no mechanical means to decide how much confidence to place in the conclusions of inductive arguments. So we have to use experience, background knowledge, common sense, and the like to help us decide how likely the conclusion is to be true. Thus, when you evaluate inductive arguments, you have to proceed on a case-by-case basis. Many logicians have tried to provide general rules, which are supposed to be more or less mechanical, to decide the likelihood of the conclusion, but such attempts mostly fail. The only recourse left is to consider the arguments one by one, using our own background knowledge and common sense as the case in question requires.
Induction by Enumeration
This type of argument is familiar to every student of statistics. Actually it is a very basic kind of statistical thinking, and it shows that statistics as a whole is just a branch of inductive thinking. Statistics is a very complex discipline, but in its essence it is a way for us to know more than what our evidence gives us. This very basic statistics is present in arguments employing induction by enumeration. Such an argument projects a ratio observed in a sample onto the whole population we want to know about. For example, consider this argument:
Four out of five of the oranges in this basket are sweet. (I tasted them myself.) Consequently, 80 percent of all the oranges in this basket are sweet.
This is an induction by enumeration. As you have studied in statistics, there are two major caveats that you have to bear in mind in order to judge how likely the conclusion is to be true: the size of the sample relative to the whole population, and whether the sample is representative of that population.
Thus, if there are about twenty oranges in the basket, tasting five of them can give you some degree of confidence. But if there are two hundred, then, everything else being equal, we are less justified in believing the conclusion. Furthermore, the sample must be well distributed. For example, if the merchant has put all the few sweet oranges on top of the sour ones, then I will be less likely to be correct in believing the conclusion if I pick only the oranges from the top.
http://pioneer.chula.ac.th/~hsoraj/PhilandLogic/WeekFour.html | 13
16 | Senate Legislative Process
Chapter 1: Overview: The Legislative Framework in the Senate
Chapter 2: Committee Organization and Procedure
Chapter 3: Senate Floor Procedure
Chapter 4: Resolving Differences with the House
Chapter 5: Enrollment and Presidential Action
Overview: The Legislative Framework in the Senate
Origin of the Senate: The Great Compromise
On July 16, 1787, the 55 Founding Fathers meeting in Philadelphia reached what is commonly called the "Great Compromise." The compromise emerged from the struggle between the large states and the small states over the apportionment of seats in the Congress. The framers easily accepted the principle of bicameralism (a two-house national legislature) but disagreed strongly over how each chamber would be constituted. This was the most contentious issue at the Constitutional Convention and nearly led to its dissolution. The large states favored the "nationalist" principle of popularly-based representation, but the smaller states insisted on a "federal" principle ensuring representation by states. The smaller states feared that if representation was based on population, the larger states would quickly dominate the new Congress.
In the end, the framers reached an agreement: House seats would be apportioned among the states based on population and representatives would be directly elected by the people; the Senate would be composed of two senators per state (regardless of size or population), indirectly elected by the state legislature. As James Madison wrote in Federalist No. 39, "The House of Representatives will derive its powers from the people of America....The Senate, on the other hand, will derive its powers from the States, as political and co-equal societies; and these will be represented on the principle of equality in the Senate." The principle of two senators from each state was further guaranteed by Article V of the Constitution: "no State, without its Consent, shall be deprived of equal Suffrage in the Senate."
Decisions made at the Constitutional Convention about the Senate still shape its organization and operation today, and make it unique among national legislative institutions. William E. Gladstone, four-time British Prime Minister during the 19th century, said the United States Senate is a "remarkable body, the most remarkable of all the inventions of modern politics." Plainly, the framers did not want the Senate to be another House of Representatives, and the institutional uniqueness of the upper house flows directly from the decisions they made at the Constitutional Convention.
Several of those constitutional decisions led to important and enduring features of the Senate and its legislative process. These features include constituency, size, term of office, and special prerogatives.
The one state - two senator formula means that all senators represent constituencies that are more heterogeneous than the districts represented by most House members. As a result, senators must accommodate a larger range of interests and pressures in their representational roles. Further, because each senator has an equal vote regardless of his or her state's population, the Senate remains an oddly apportioned institution: senators from the twenty-six smallest states, who (according to the 2000 census) represent 17.8% of the nation's population, constitute a majority of the Senate—a reality which has aroused little public interest or concern.
The framers, of course, could not have foreseen the country's population increases, migratory patterns, or huge disparities in state sizes. While members from small and large states all have comparable committee and floor responsibilities, few are likely to deny that senators from the more populous states, such as California, face a broader array of representational pressures than lawmakers from the smaller states, such as Wyoming. An indirect effect of Senate apportionment, some scholars contend, is that contemporary floor leaders of either party are likely to come from smaller rather than larger states because they can better accommodate the additional leadership workload.
The one state - two senator formula also meant that from the outset the Senate's membership was relatively small compared to the House. When it first convened in March 1789, there were 22 senators (North Carolina and Rhode Island soon entered the Union to increase the number to 26). As new states entered the Union, the Senate's size expanded to the 100 members it has today.
The Senate's relatively small size has significantly shaped how it works. In the smaller and more intimate Senate, vigorous leadership has been the exception rather than the rule. The relative informality of Senate procedures testifies to the looser reins of leadership. Significantly, there is broad deference to minority views, and all senators typically have ample opportunities to be heard on the issues of the day. Compared with the House's complex rules and voluminous precedents, the Senate's rules are brief and often set aside. Informal negotiations among senators interested in a given measure are commonplace. Although too large for its members to draw their chairs around the fireplace on a chilly winter morning—as they did in the early years—the Senate today retains a clubby atmosphere that the House lacks.
Term of Office, Qualifications, and Selection
A key goal of the framers was to create a Senate differently constituted from the House so it would be less subject to popular passions and impulses. "The use of the Senate," wrote James Madison in Notes of Debates in the Federal Convention of 1787, "is to consist in its proceedings with more coolness, with more system and with more wisdom, than the popular branch." An oft-quoted story about the "coolness" of the Senate involves George Washington and Thomas Jefferson, who was in France during the Constitutional Convention. Upon his return, Jefferson visited Washington and asked why the Convention delegates had created a Senate. "Why did you pour that tea into your saucer?" asked Washington. "To cool it," said Jefferson. "Even so," responded Washington, "we pour legislation into the senatorial saucer to cool it."
To foster values such as deliberation, reflection, continuity, and stability in the Senate, the framers made several important decisions. First, they set the senatorial term of office at six years even though the duration of a Congress is two years. The Senate, in brief, was to be a "continuing body" with one-third of its membership up for election at any one time. As Article I, section 3, states: "Immediately after they shall be assembled in consequence of the first election, they shall be divided as equally as may be into three classes." Second, to be a senator, individuals had to meet different qualifications compared to service in the House of Representatives. To hold office, senators have to be at least 30 years of age and nine years a citizen; House members are to be 25 years and seven years a citizen. Senators, in brief, were to be more seasoned and experienced than representatives. Finally, the indirect election of senators by state legislatures would serve to check precipitous decisions which might emanate from the directly elected House and buttress the states' role as a counterweight to the national government.
Direct election of senators came with the 17th Amendment, ratified in 1913. A byproduct of the Progressive movement, it was designed to end corruption in state legislatures (involving the purchase of Senate seats), blunt the power of party machine bosses and corporations, prevent deadlocks in the election of senators, and make senators directly answerable to the people for their actions and decisions.
Although the House and Senate share all lawmaking authority, including overriding presidential vetoes, the framers assigned special prerogatives to the Senate. Under the Constitution's "advice and consent" provisions (Article II, section 2), only the Senate considers the ratification of treaties (which requires a two-thirds vote) and presidential appointments for such positions as federal judgeships, ambassadorships, or Cabinet offices (all of which require a majority vote for approval). The framers entrusted "advice and consent" duties exclusively to the Senate in part because they expected these matters to be handled in a thoughtful and responsible manner. The qualities they embedded in a continuing body—stability, experience, and a longer perspective—were valuable in handling issues involving national security and international relations. The Senate's role in the appointments process, wrote Alexander Hamilton in Federalist No. 76, would serve as "an excellent check upon the spirit of favoritism in the President, and would tend greatly to preventing the appointment of unfit characters from State prejudice, from family connection, from personal attachment, or from a view to popularity."
The Constitution (Article I, section 3) also grants the Senate the "sole Power to try all Impeachments." The House possesses the constitutional authority to decide by majority vote whether to impeach (or indict) executive or judicial officials while the Senate, by a two-thirds vote, determines whether to convict and remove from office any impeached official. "Where else," wrote Alexander Hamilton in Federalist No. 65, "than in the Senate could have been found a tribunal sufficiently dignified, or sufficiently independent? What other body would be likely to feel confidence enough in its own situation, to preserve unawed and uninfluenced the necessary impartiality between an individual accused, and the representatives of the people, his accusers?" (Italics in original)
Understandably, the Senate's constitutional origins continue to shape its organization and operations. Three features in particular are noteworthy, because they contribute to making the Senate the unique institution that it is. They are extended debate, the lack of formal party leaders until the early 1900s, and the use of unanimous consent to conduct most of its business.
All senators have two traditional freedoms that, so far as is known, no other legislators worldwide possess. These two freedoms are unlimited debate and an unlimited opportunity to offer amendments, relevant or not, to legislation under consideration. The small size of the Senate permitted these traditional freedoms to emerge and flourish, subject to very few restrictions. Not until 1917 did the Senate adopt its first cloture rule (Rule XXII). Thus, from 1789 until 1917, there was no way for the Senate to terminate extended debates (called "filibusters" if employed for dilatory purposes) except by unanimous consent, compromise, or exhaustion.
Throughout the 19th century, many senators were called "leaders" by their colleagues, commentators, scholars, or others. But no single senator exercised central management of the legislative process in the manner of today's floor leader. As late as 1885, Woodrow Wilson could write in his classic study, Congressional Government, "No one is the Senator.... No one exercises the special trust of acknowledged leadership." (Italics in original) No doubt the small size of the early Senate and the tradition of viewing members as "ambassadors" from sovereign states promoted an informal and personal style of senatorial leadership. Although the general scholarly consensus is that certain senators began to function formally as party leaders in the early 1900s, the minutes of the respective party caucuses indicate that Democrats officially elected their "leader" in 1920; Republicans followed suit five years later. Floor leaders acquired procedural resources over time, such as their right of preferential recognition, which helped them to manage the Senate's work. However, their formal powers are limited and many floor leaders have said that their job is akin to "herding cats."
From its beginning, the Senate has transacted much of its business by unanimous consent. The Senate's small size, few rules, and informality encouraged the rise of this practice. A single objection ("I object") blocks a unanimous consent request. Even several of the Senate's early rules incorporated unanimous consent provisions to speed the Senate's routine business.
Two types of unanimous consent are prevalent in today's Senate. Simple unanimous consent requests deal with noncontroversial matters, such as senators asking unanimous consent to dispense with the reading of amendments. Complex unanimous consent agreements establish a tailor-made procedure for considering virtually any kind of business that the Senate takes up. They are commonly brokered by the parties' floor leaders and managers. Two fundamental objectives of these accords are to limit debate and to structure the amendment process. As two Senate parliamentarians wrote in the Senate's volume of precedents: "Whereas Senate Rules permit virtually unlimited debate, and very few restrictions on the right to offer amendments, these [unanimous consent] agreements usually limit debate and the right of Senators to offer amendments."
If the Founding Fathers visited the modern Senate, they would find that most of their fundamental principles continue to guide its legislative process. The direct election of senators is probably the most significant constitutional change to their handiwork. On the other hand, the "changing Senate" might surprise some of the framers. Senators, for example, typically attract large media attention, especially compared to most House members. One result is that the Senate has been an "incubator" for presidential contenders. The practice of "holds" (requests by senators to party leaders to delay floor consideration of legislation or nominations), which is nowhere recognized in Senate rules or precedents and about which little is known with respect to its origins, has become a prominent feature of today's Senate. Despite these and many other developments, the Senate remains the preeminent legislative forum for protecting political minorities and debating and refining the great issues of the day.
Committee Organization and Procedure
Committees in the Senate have the power to conduct hearings and investigations, to draft bills and resolutions (and amendments to them), to report legislation to the Senate for its possible consideration, and to conduct oversight of the executive branch. Senate committees also have the power to originate legislation. Additionally, Senate committees consider treaties and nominations in the course of the Senate's exercise of its constitutional authority of "advice and consent."
Committee Assignments. Committee assignments serve an important purpose in each senator's pursuit of legislative, representational, and other goals. They are also important to party leaders who organize and shape the composition of the committees. Senate rules prescribe the size of each committee. Committee party ratios generally reflect party strength in the chamber. Adjustments to committee size and ratio often result from interparty negotiations before each Congress. Senate rules specify certain procedures for making committee assignments. The rules of the party conferences supplement the Senate rules, providing more specific criteria for making committee assignments.
Senate rules categorize standing and other committees for the purpose of distributing committee assignments to senators. Essentially, each senator is limited to service on two "A" committees, and on one "B" committee. Assignment to "C" committees is unrestricted. Party rules also restrict senators' service on so-called "Super A" committees. Additionally, these service rules may be waived individually or collectively, as the Senate (and its parties) think necessary.
Jurisdiction and Referral. Committee jurisdiction is determined by Senate rules, supplemented by formal agreements among committees and precedents established by prior referrals. Senate Rule XXV identifies the policy topics handled by each standing committee. The formal responsibility for referral rests with the presiding officer of the Senate, but in practice the Senate parliamentarian advises on bill referrals. Measures are generally referred to a single committee based on "the subject matter which predominates." By unanimous consent, the Senate permits multiple referrals, either joint or sequential, for measures that cross jurisdictional boundaries. Multiple referrals may also be accomplished by motion of the joint party leaders, although it appears that this motion has never been used.
Subcommittees. A subcommittee is a subunit of a committee established for the purpose of dividing and managing a committee's work. Unlike the House, the Senate places no direct limits on the number of subcommittees that a committee may create, and there are no requirements to create any subcommittees.
However, both Senate and Republican Conference rules limit the number of subcommittee assignments per senator. Under Senate Rule XXV, a senator may sit on no more than three subcommittees on each of his class "A" committee assignments, and on no more than two subcommittees on a class "B" committee. A Senate standing order also encourages Senate committees to adopt rules for equitable assignment of senators to subcommittees. Several committees have adopted such provisions, which prohibit a senator's assignment to a second subcommittee until all committee members have chosen one assignment in the order of their seniority. As with full committee assignment limits, subcommittee assignment limits can be waived.
Committee Rules. As agents of the Senate, committees must comply with all applicable Senate directives. Most of these requirements appear in Senate Rule XXVI. Each Senate committee must adopt (and publish in the Congressional Record), written rules to govern its proceedings "not inconsistent with the Rules of the Senate." These committee rules generally dictate the procedures a committee follows in conducting its business. For example, committees must select a regular meeting day, which must be at least monthly, and determine appropriate quorums for various committee actions within the limits of Senate rules.
Committees use hearings to gather information for use in legislative, oversight and investigative activities, and to review the qualifications of presidential nominees. Regardless of the type of hearing, or whether a hearing is held in Washington or elsewhere, hearings share common aspects of planning and preparation. Senate standing committees and subcommittees are authorized to meet and to hold hearings when the Senate is in session, and when it has recessed or adjourned. To minimize conflicts with floor activities, a committee may not meet, without unanimous consent, on any day after the Senate has been in session for two hours, or after 2:00 p.m. when the Senate is in session.
Senate Rule XXVI requires each committee (except Appropriations and Budget) to give at least one week's notice of the date, place, and subject of a hearing; however, a committee may hold a hearing with less than one week's notice if it determines that there is "good cause." These notices appear in the Daily Digest section of the Congressional Record and online. While the Senate rule requires a one week public notice, a separate standing order of the Senate requires each Senate committee to notify the Daily Digest Office as soon as a hearing is scheduled [S.Res. 4, 95th Congress]. Hearings are generally open to the public, but can be closed by a committee roll-call vote in open session if the subject matter falls within specific categories enumerated in Senate rules.
Although a committee chair determines the agenda and selects witnesses, the minority typically works informally with the majority to invite witnesses representing its views. Senate rules allow the minority-party members of a committee (except Appropriations) to call witnesses of their choice on at least one day of a hearing. Witnesses before Senate committees generally must provide the committee with a copy of their written testimony at least one day before their oral testimony, with specifics set out in individual committee rules. It is common practice to request witnesses to limit their oral remarks to a brief summary of the written testimony.
A question-and-answer period generally follows a witness's testimony. Each committee determines the order in which senators question witnesses. Although Senate rules do not restrict the length of time each senator may question a witness, several committees have adopted such rules. Some committees also authorize committee staff to question witnesses.
A markup is a meeting of the committee to debate and consider amendments to a measure under consideration. The markup determines whether the measure pending before a committee will be recommended to the full Senate, and whether it should be amended in any substantive way.
Procedures in markup for the most part reflect procedures used on the Senate floor, possibly modified by an individual committee's rules. The process begins when the chair of the committee schedules and sets the agenda for the markup. In leading a markup, the chair has broad discretion choosing the legislative vehicle and presenting it for consideration and amendment. The measure that is marked up may be one that was introduced in the Senate, or received from the House and referred to the committee. Alternatively the chair may choose to consider the text of a draft measure that has not been introduced, such as a subcommittee-reported version or a chairman's mark. In still other cases, the markup vehicle may be placed before the committee as an "amendment in the nature of a substitute" for the measure or text initially referred to it.
When a committee concludes its markup, any committee member may move to order the measure reported to the Senate. A committee has several options for the form in which the measure is ordered reported. It may be reported with no changes, with amendments to various sections adopted in markup, or with one amendment in the nature of a substitute. In addition, a Senate committee is authorized to report an original bill that embodies a text decided upon in markup.
Senate rules require the physical presence of a majority of the committee in order to report a measure. Absent senators may vote by proxy on reporting a measure unless a committee has adopted a rule to the contrary, but such proxy votes may not affect the outcome of a vote to report a measure, and proxies may not be counted to determine a quorum.
Following a committee's vote to order a measure reported, it is the duty of the committee's chairman to report the measure promptly to the Senate. When a committee reports a measure, it generally prepares an accompanying written report that describes the purposes and provisions of the measure. If a report is submitted, Senate rules and statutes require the inclusion of such components as records of roll-call votes cast in committee, cost estimates, a statement of regulatory impact, and the specific changes the legislation would make to existing law. Committee members are also entitled to at least three days to prepare supplementary, minority, or additional views for inclusion in the report.
Senate committees publish a variety of documents dealing with legislative issues, investigations, and internal committee matters. Print copies of these publications are generally available from the issuing committees or the Senate document room. Increasingly, committee publications are available in electronic format, either on the committee's Web site or via GPO Access.
Printed hearings contain the edited transcript of testimony, but they are often not published for months after a hearing. Hearing transcripts are usually available for inspection in committee offices and are often posted online.
Committee Reports on Measures
A committee report accompanying legislation, described above, provides an explanation of a measure, and the committee's actions in considering it.
Committee calendars are comprehensive records of a committee's actions, including committee rules, membership, brief legislative histories of measures referred to the committee, lists of hearings and markups held, and often a list of committee publications. Calendars are published at the end of each Congress.
Finally, committees may also publish other information as "committee prints." A committee print might include committee rules or a report on a policy issue the committee wants to distribute widely, but in a form which is less formal than a committee report. A committee may also prepare a text which the Senate (by resolution) orders printed as a numbered Senate document.
Senate Floor Procedure
Senate Proceedings and Senators' Rights
Senate floor proceedings are governed not only by the Senate's standing rules and precedents, but by various customary practices. Generally, these practices expedite business, but require unanimous consent.
Senate rules and practices emphasize full deliberation more than expeditious decision, and the rights of individual senators more than the powers of the majority. Senators can protect their rights by objecting to unanimous consent requests to waive rules. Compromise and accommodation tend to prevail; senators most often insist on strict enforcement of the rules in contentious situations.
Debate, Filibusters, and Cloture
The presiding officer of the Senate may not use the power to recognize senators to control the flow of business. If no senator holds the floor, any senator seeking recognition has a right to be recognized, and then, usually, to speak for as long as he or she wishes (but only twice a day on the same question). Once recognized, a senator can move to call up any measure or offer any amendment or motion that is in order. Senate rules do not permit a majority to end debate and vote on a pending question.
Generally, no debatable question can come to a vote if senators still wish to speak. Senators who oppose a pending bill or other matter may speak against it at indefinite length, or delay action by offering numerous amendments and motions. A filibuster involves using such tactics in the hope of convincing the Senate to alter a measure or withdraw it from consideration. The only bills that cannot be filibustered are those few considered under provisions of law that limit time for debating them.
The only procedure Senate rules provide for overcoming filibusters is cloture, which cannot be voted until two days after it is proposed in a petition signed by 16 senators. Cloture requires the support of three-fifths of senators (normally 60), except on proposals to change the rules, when cloture requires two-thirds of senators voting. If the Senate invokes cloture on a bill, amendment, or other matter, its further consideration is limited to 30 additional hours, including time consumed by votes and quorum calls, during which each senator may speak for no more than one hour.
Scheduling Legislative Business
Senate business includes legislative business (bills and resolutions) and executive business (nominations and treaties). (The Senate also sits as a court to try impeachments, for which a special, separate set of rules applies.) When introduced or received from the House or the president, legislative or executive business is normally referred to the committee with appropriate jurisdiction. Business is placed on the legislative or executive calendar, and becomes available for floor consideration, if the committee reports it.
The Senate accords its majority leader prime responsibility for scheduling. He may carry out this responsibility by moving that the Senate proceed to consider a particular matter. By precedent, he and the minority leader are recognized preferentially, and by custom only he (or his designee) makes motions or requests affecting when the Senate will meet and what it will consider.
For executive business, this motion to proceed may be offered in a nondebatable form, but for legislative business it usually is debatable. Whenever possible, therefore, the majority leader instead calls up bills and resolutions by unanimous consent. If senators object to unanimous consent to take up a measure, they are implicitly threatening to filibuster a motion to consider it. They may do so because they oppose that measure, or in the hope of influencing action on some other matter.
Senators can even place a "hold" on a measure or nomination, although this practice is not recognized in Senate rules. "Holds" are requests by senators to their party's floor leader to object on their behalf to any request to consider a matter, at least until they have been consulted. The majority leader will usually not even request consent to consider a measure if there is a hold on it.
Senate rules also permit a measure to be placed directly on the calendar when introduced or received from the House. This process permits senators to bypass referral to a committee they believe unsympathetic. Alternatively, if a committee fails to report a measure, a new measure with exactly the same provisions may be introduced and placed directly on the calendar.
Finally, Senate rules do not require that amendments be germane or relevant, except to general appropriation bills, budget measures, and matters under cloture (and a few other bills, pursuant to statutes). Consequently, if a committee fails to report a measure, a senator may offer its text as an amendment to any other measure under consideration, without regard to the scheduling preferences of the majority leader.
The Daily Order of Business
Each time the Senate convenes after an adjournment, a new legislative day begins. On each new legislative day, Senate rules provide for a "Morning Hour" during which routine "morning business" can occur, such as introducing bills and submitting committee reports. During this period, the Senate may also be able to take up bills on the calendar by nondebatable motions.
In practice, the Senate often recesses at the end of the day, rather than adjourning. Party leaders sometimes prefer a recess because it gives them greater flexibility in shaping the Senate's daily business. Since there is then no Morning Hour when the Senate next convenes, the majority leader usually obtains unanimous consent for "a period for routine morning business," such as bill introductions. Senators often make brief speeches during this period.
After the Morning Hour or the period for routine morning business, the Senate normally resumes consideration of the business previously before it. This business may be set aside, temporarily or indefinitely, in favor of other business through motions or unanimous consent requests by the majority leader. At any point in the day, noncontroversial business also may be conducted by unanimous consent.
Unanimous Consent Agreements
Senators' rights to debate and to offer nongermane amendments encourage the leaders to seek unanimous consent agreements that limit the exercise of these rights during consideration of a specified matter. If any senator objects, the Senate cannot impose such an agreement, but once it is accepted, the Senate may later change its terms only by unanimous consent.
Unanimous consent agreements limiting the time for debate on a measure are frequently called "time agreements." Time agreements impose stated limits on debate of questions that may arise during consideration of a measure, and often on the legislation itself. These agreements place the time provided under the control of managers. Other senators then may speak only if a manager yields them part of the time he or she controls.
Unanimous consent agreements also may require that amendments to a measure be germane, or, alternatively, relevant to it. Relevancy is a somewhat less restrictive standard than germaneness. An agreement may prohibit all amendments to a measure except those it specifically identifies.
Responsibility for negotiating time agreements falls primarily on the party floor leaders and the leaders of the reporting committee. Individual senators advise the leaders of their preferences and intentions, and time agreements may include exceptions to their general provisions in order to satisfy these preferences.
The Senate begins consideration of most measures without first having reached a time agreement. For some measures, few amendments and little debate are expected, making an agreement unnecessary. For others, consideration may proceed while the floor leaders and managers try to arrange unanimous consent agreements for limited purposes. Before consideration of a controversial amendment, for example, leaders may propose to limit debate on it. If extended consideration occurs, the leaders often seek an overall agreement limiting debate on each remaining amendment, or setting a time for a vote on final passage.
The Amending Process
Floor consideration of a measure usually begins with opening statements by the floor managers, and often by other senators. The managers usually are the chair and ranking minority member of the reporting committee or pertinent subcommittee.
The first amendments to be considered are those recommended by the reporting committee. If the committee has proposed many amendments, the manager often obtains unanimous consent that these amendments be adopted, but that all provisions of the measure as amended remain open to further amendment. After committee amendments are disposed of, amendments may be offered to any part of the measure in any order. If the committee recommends a substitute for the full text of the measure, the substitute normally remains open to amendment throughout its consideration.
The Senate may dispose of each amendment either by voting on it directly or by voting to table it. The motion to table cannot be debated; and, if the Senate agrees to it, the effect is the same as a vote to defeat the amendment. If the Senate defeats the motion, however, debate on the amendment may resume.
While an amendment is pending, senators may propose amendments to it (called second-degree amendments) and to the part of the measure the amendment would change. The Senate votes on each of these amendments before it votes on the first-degree amendment (the amendment to the measure). Many additional complications exist. When a complete substitute for a measure is pending, for example, senators can propose six or more first- and second-degree amendments to the substitute and the measure before any votes must occur.
If an amendment is considered under a time limitation, senators may make no motions or points of order, or propose other amendments, until all the time for debating the amendment has been used or yielded back. Sometimes, however, the Senate unanimously consents to lay aside pending amendments temporarily in order to consider another amendment to the measure.
The amending process continues until the Senate orders the bill engrossed and read a third time, which precludes further amendment. Then the Senate votes on final passage.
Voting and Quorum Calls
The Constitution requires a majority of senators to be present for the Senate to conduct business. If a senator suggests the absence of a quorum, and a majority of senators do not respond to their names, the Senate can only adjourn, recess, or attempt to secure the attendance of additional senators. However, the purpose of a quorum call usually is to suspend floor activity temporarily to accommodate individual senators, discuss procedural or policy problems, or arrange subsequent proceedings. As a result, quorum calls usually are ended by unanimous consent before the clerk completes a call of the roll.
Article I, section 5, paragraph 3 of the Constitution provides that one-fifth of those present (11 senators, if no more than a quorum is present) can order the yeas and nays — also known as a rollcall vote or a recorded vote. If a senator asks for the yeas and nays on a pending question, and the Senate orders them, it does not mean that a vote will occur immediately. Instead, ordering the yeas and nays means that whenever the vote does occur, it will be by roll call and will be recorded in the Journal. Otherwise, votes usually are taken by voice vote.
Resolving Differences with the House
A bill cannot become a law of the land until it has been approved in identical form by both houses of Congress. Once the Senate amends and agrees to a bill that the House already has passed—or the House amends and passes a Senate bill—the two houses may begin to resolve their legislative differences by way of a conference committee or through an exchange of amendments between the houses.
If the Senate does not accept the House’s position (or the House does not agree to the Senate’s position), one of the chambers may propose creation of a conference committee to negotiate and resolve the matters in disagreement between the two chambers. Typically, the Senate gets to conference with the House by adopting this standard motion: "Mr. President, I move that the Senate insist on its amendments (or "disagree to the House amendments" to the Senate-passed measure), request a conference with the House on the disagreeing votes thereon, and that the Chair be authorized to appoint conferees." This triple motion rolled into one–to insist (or disagree), request, and appoint–is commonly agreed to by unanimous consent. The presiding officer formally appoints the Senate’s conferees. (The Speaker names the House conferees.) Conferees are traditionally drawn from the committee of jurisdiction, but conferees representing other Senate interests may also be appointed.
There are no formal rules that outline how conference meetings are to be organized. Routinely, the principals from each chamber or their respective staffs conduct pre-conference meetings so as to expedite the bargaining process when the conference formally convenes. Informal practice also determines who will be the overall conference chair (each chamber has its own leader in conference). Rotation of the chairship between the chambers is usually the practice when matched pairs of panels (the tax or appropriations panels, for example) convene in conference regularly. For standing committees that seldom meet in conference, the choice of who will chair the conference is generally resolved by the conference leaders from each chamber. Decisions on when and where to meet, and for how long, are among the prerogatives of the chair, who consults on these matters with his or her counterpart from the other body.
Once the two chambers go to conference, the respective House and Senate conferees bargain and negotiate to resolve the matters in bicameral disagreement. Resolution is embodied in a conference report, signed by a majority of Senate conferees and House conferees. The conference report must be agreed to by both chambers before it is cleared for presidential consideration. In the Senate, conference reports are usually brought up by unanimous consent at a time agreed to by the party leaders and floor managers. Because conference reports are privileged, if any senator objects to the unanimous consent request, a nondebatable motion can be made to take up the conference report. Approval of the conference report itself is subject to extended debate, but conference reports are not open to amendment.
Almost all of the most important measures are sent to conference, but these are only a minority of the bills that the two houses pass each year.
Exchange of Amendments between the Houses
Differences between versions of most noncontroversial bills and some major bills that must be passed quickly are reconciled through the exchange of amendments between the houses. The two chambers may send measures back and forth, amending each other’s amendments until they agree to identical language on all provisions of the legislation. Generally, the provisions of an amendment between the houses are the subject of informal negotiations, so extended exchanges of amendments are rare. But there is also a parliamentary limit on the number of times a measure may shuttle between the chambers. In general, each chamber has only two opportunities to amend the amendments of the other body because both chambers prohibit third-degree amendments. In rare instances, however, the two chambers waive or disregard the parliamentary limit and exchange amendments more than twice. The current record is nine exchanges.
At any stage of this process a chamber may accept the position of the other body, insist on its most recent position, request a conference to resolve the remaining differences, or refuse to take further action and allow the measure to die.
The Senate normally takes action on an amendment of the House only when there is an expectation that the amendment may be disposed of readily, typically by unanimous consent. In the absence of such an expectation, the Senate will generally proceed to conference in order to negotiate a resolution to any serious disagreements within the Senate or with the House rather than attempt to resolve them on the floor.
The Senate and House must resolve all their disagreements concerning a bill or joint resolution before it can be "enrolled" and presented to the president for his approval or veto. When the measure has finally been approved by both houses, all the original papers are transmitted to the enrolling clerk of the originating chamber.
Enrollment and Presidential Action
Enrollment and Presentation
After the Senate and House resolve all their disagreements concerning a bill or joint resolution, all the original papers are transmitted to the enrolling clerk of the originating chamber, who has the measure printed on parchment, certified by the chief officer of the originating chamber, and signed by the Speaker of the House and by either the vice president (who is the president of the Senate) or the authorized presiding officer of the Senate. The enrolled bill then goes to the president for his approval or veto.
Measures are not always presented immediately to the president. A variety of factors can produce delays. When the president has been out of the country for long periods of time, for example, the White House and congressional leaders have agreed that enrolled measures will be presented to the president upon his return; at other times, measures have been sent to the president overseas. In other instances, congressional leaders present measures so as to give time for organizing public signing ceremonies or so that the signing can take place on a particular day. In still other instances, depending on whether the president is expected to sign or veto a measure, congressional leaders time the presentation to avoid or to bring political pressure to bear on the president.
Pursuant to Article 1, section 7 of the Constitution, "Every Bill, which shall have passed the House of Representatives and the Senate, shall, before it become a Law, be presented to the President of the United States; . . . ." If the president approves and signs the measure within 10 days, it becomes law. The 10-day period begins at midnight of the day the president receives the measure, and Sundays are not counted. Thus, if the president were to receive an enrolled measure on Thursday, February 14th, the first day of the 10-day period would be Friday, February 15th; the last day would be Tuesday, February 26th.
If the president objects to a measure, he may veto it by returning it to its chamber of origin together with a statement of his objections, again within the same 10-day period. Unless both chambers subsequently vote by a 2/3 majority to override the veto, the measure does not become law.
If the president does not act on a measure—approving or vetoing it—within 10 days, the fate of the measure depends on whether Congress is in session. If Congress is in session, the bill becomes law without the president's approval. If Congress is not in session, the measure does not become law. Presidential inaction when Congress is not in session is known as a pocket veto. Congress has interpreted the use of the pocket veto to be limited to the final, so-called sine die adjournment of the originating chamber. The scope of the president's pocket veto authority has not been definitively decided. | http://www.senate.gov/legislative/common/briefing/Senate_legislative_process.htm | 13
35 | How To Think Logically
INDUCTIVE REASONING: When you reason inductively, you begin with a number of instances (facts or observations) and use them to draw a general conclusion. Whenever you interpret evidence, you reason inductively. The use of probability to form a generalization is called an inductive leap. Inductive arguments, rather than producing certainty, are thus intended to produce probable and believable conclusions. As your evidence mounts, your reader draws the conclusion that you intend. You must make sure that the amount of evidence is sufficient and not based on exceptional or biased sampling. Be sure that you have not ignored information that invalidates your conclusion (called the “neglected aspect”) or presented only evidence that supports a predetermined conclusion (known as “slanting”).
DEDUCTIVE REASONING: When you reason deductively, you begin with generalizations (premises) and apply them to a specific instance to draw a conclusion about that instance. Deductive reasoning often utilizes the syllogism, a line of thought consisting of a major premise, a minor premise and a conclusion; for example, All men are foolish (major premise); Smith is a man (minor premise); therefore, Smith is foolish (conclusion). Of course, your reader must accept the ideas or values that you choose as premises in order to accept the conclusion. Sometimes premises are not stated. A syllogism with an unstated major or minor premise, or even an unstated conclusion, needs to be examined with care because the omitted statement may contain an inaccurate generalization.
THE TOULMIN METHOD: Another way of viewing the process of logical thinking is through the Toulmin method. This model is less constrained than the syllogism and makes allowances for the important elements of probability, backing or proof for the premise, and rebuttal of the reader’s objections. This approach sees arguments as the progression from accepted facts or evidence (data) to a conclusion (claim) by way of a statement (warrant) that establishes a reasonable relationship between the two. The warrant is often implied in arguments, and like the unstated premise in the syllogism, needs careful examination to be acceptable. The writer can allow for exceptions to a major premise. Qualifiers such as probably, possibly, doubtless, and surely show the degree of certainty of the conclusion; rebuttal terms such as unless allow the writer to anticipate objections.
FALLACIES: A deductive argument must be both valid and true. A true argument is based on generally accepted, well-backed premises. Learn to distinguish between fact (based on verifiable data) and opinion (based on personal preferences). A valid argument follows a reasonable line of thinking.
Fallacies are faults in premises (truth) or in reasoning (validity). They may result from misusing or misrepresenting evidence, from relying on faulty premises or omitting a needed premise, or from distorting the issues. The following are some of the major forms of fallacies:
Non Sequitur: A statement that does not follow logically from what has just been said; in other words, a conclusion that does not follow from the premises.
Hasty Generalization: A generalization based on too little evidence or on exceptional or biased evidence.
Ad Hominem: Attacking the person who presents an issue rather than dealing logically with the issue itself.
Bandwagon: An argument saying, in effect, "Everyone's doing or saying or thinking this, so you should too."
Red Herring: Dodging the real issue by drawing attention to an irrelevant issue.
Either...Or: Stating that only two alternatives exist when in fact there are more than two.
False Analogy: The assumption that because two things are alike in some ways, they must be in other ways.
Equivocation: An assertion that falsely relies on the use of a term in two different senses.
Slippery Slope: The assumption that if one thing is allowed, it will be the first step in a downward spiral.
Oversimplification: A statement or argument that leaves out relevant considerations about an issue.
Begging the Question: An assertion that restates the point just made. Such an assertion is circular in that it draws as a conclusion a point stated in the premise.
False Cause: The assumption that because one event follows another, the first is the cause of the second. Sometimes called post hoc, ergo propter hoc ("after this, so because of this"). | http://www.trinitysem.edu/Student/LessonInstruction/ThinkLogically.html | 13 |
22 | By Nathan Fox
Yesterday, I offered a definition of the word “assumption” using a very simplistic mathematical example. Today, I’m going to dig a bit deeper into the Assumption category by using another super-simple bit of math. Don’t panic! If you passed third grade, you’ve seen this math before.
Many students struggle with assumption questions because they don’t understand that there are two very different types of assumptions. The purpose of this post is to start teaching you the difference.
The previous post offered an example of an assumption that was both sufficient and necessary. Today I am going to talk about just one of those types, the “Sufficient Assumption.” So consider the following argument:
Premise: Anything times zero equals zero. Conclusion: Therefore A times B equals zero.
Question: “Which one of the following, if true, would allow the conclusion to be properly inferred?” Or, stated another way, “Which one of the following, if assumed, would justify the argument’s conclusion?” Both of these are asking for sufficient assumptions. (You might want to memorize the wording of those questions, so that you can differentiate a sufficient assumption question from a necessary assumption question.)
This question is asking you to prove the argument’s conclusion. In order to prove a conclusion on the LSAT, the conclusion of the argument must be connected, with no gaps, to the evidence offered. So we need an answer that connects the evidence “anything times zero is zero” to the conclusion “A times B is zero.”
It’s pretty simple. The answer must contain one of the following:
“A equals zero.” If it’s true that A is zero, and if it’s true that anything times zero equals zero, then no matter what B is, the conclusion “A times B equals zero” would be proven correct. And proof is what we’re looking for on a sufficient assumption question.
“B equals 0″ would be just as good, because no matter what A is, the conclusion “A times B equals zero” would be proven correct.
Here's the really interesting part (if you're a nerd like me, which I hope you are). While "A equals zero" and "B equals zero" are each sufficient to prove the conclusion correct, neither of these statements, independently, is necessary in order for the argument to possibly make sense. A could be 1,000,000, and the conclusion "A times B equals zero" could still be conceivable (if B equals zero). Likewise, B could be 1,000,000 and the conclusion could still be possible (as long as A equals zero). So if the question had said "which one of the following is an assumption required by the argument" (that's asking for a necessary component of the argument), then "A equals zero" would not be a good answer. Nor would "B equals zero."
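A minimal sketch (not part of the original post) that makes the same point computationally: setting A to zero is sufficient to guarantee that A times B equals zero, but it is not necessary, since the product can be zero even when A is not:

```python
import itertools

values = range(-3, 4)  # a small sample of integers to test

# Sufficient: whenever A == 0, the conclusion A * B == 0 holds.
assert all(a * b == 0 for a, b in itertools.product(values, repeat=2) if a == 0)

# Not necessary: the conclusion can hold even though A != 0.
assert any(a * b == 0 and a != 0 for a, b in itertools.product(values, repeat=2))
# e.g. a = 1_000_000, b = 0 still gives a * b == 0
```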
The definition of “sufficient assumption” is “something that would prove the argument’s conclusion to be correct.” Go ahead and memorize that. I’ll be back soon to offer a definition of “necessary assumption.” Once I’m done with that, I promise I won’t use any math for a while. | http://www.foxtestprep.com/lsat-blog/2011/12/05/what-sufficient-assumption-means/ | 13 |
30 |
The Socratic method (also known as method of elenchus, elenctic method, Socratic irony, or Socratic debate), named after the classical Greek philosopher Socrates, is a form of inquiry and debate between individuals with opposing viewpoints based on asking and answering questions to stimulate critical thinking and to illuminate ideas. It is a dialectical method, often involving an oppositional discussion in which the defense of one point of view is pitted against the defense of another; one participant may lead another to contradict himself in some way, thus strengthening the inquirer's own point.
The Socratic method is a negative method of hypothesis elimination, in that better hypotheses are found by steadily identifying and eliminating those that lead to contradictions. The Socratic method searches for general, commonly held truths that shape opinion, and scrutinizes them to determine their consistency with other beliefs. The basic form is a series of questions formulated as tests of logic and fact intended to help a person or group discover their beliefs about some topic, exploring the definitions or logoi (singular logos), seeking to characterize the general characteristics shared by various particular instances. The extent to which this method is employed to bring out definitions implicit in the interlocutors' beliefs, or to help them further their understanding, is called the method of maieutics. Aristotle attributed to Socrates the discovery of the method of definition and induction, which he regarded as the essence of the scientific method.
In the second half of the 5th century BC, sophists were teachers who specialized in using the tools of philosophy and rhetoric to entertain or impress or persuade an audience to accept the speaker's point of view. Socrates promoted an alternative method of teaching which came to be called the Socratic method. Socrates began to engage in such discussions with his fellow Athenians after his friend from youth, Chaerephon, visited the Oracle of Delphi, which confirmed that no man in Greece was wiser than Socrates. Socrates saw this as a paradox, and began using the Socratic method to answer his conundrum. Diogenes Laërtius, however, wrote that Protagoras invented the “Socratic” method.
Plato famously formalized the Socratic elenctic style in prose—presenting Socrates as the curious questioner of some prominent Athenian interlocutor—in some of his early dialogues, such as Euthyphro and Ion, and the method is most commonly found within the so-called "Socratic dialogues", which generally portray Socrates engaging in the method and questioning his fellow citizens about moral and epistemological issues.
The phrase Socratic questioning is used to describe a kind of questioning in which an original question is responded to as though it were an answer. This in turn forces the first questioner to reformulate a new question in light of the progress of the discourse.
Elenchus (Ancient Greek: ἔλεγχος elengkhos "argument of disproof or refutation; cross-examining, testing, scrutiny esp. for purposes of refutation") is the central technique of the Socratic method. The Latin form elenchus (plural elenchi ) is used in English as the technical philosophical term.
In Plato's early dialogues, the elenchus is the technique Socrates uses to investigate, for example, the nature or definition of ethical concepts such as justice or virtue. According to one general characterization, it has the following steps:
- Socrates' interlocutor asserts a thesis, for example "Courage is endurance of the soul", which Socrates considers false and targets for refutation.
- Socrates secures his interlocutor's agreement to further premises, for example "Courage is a fine thing" and "Ignorant endurance is not a fine thing".
- Socrates then argues, and the interlocutor agrees, that these further premises imply the contrary of the original thesis, in this case it leads to: "courage is not endurance of the soul".
- Socrates then claims that he has shown that his interlocutor's thesis is false and that its negation is true.
One elenctic examination can lead to a new, more refined, examination of the concept being considered, in this case it invites an examination of the claim: "Courage is wise endurance of the soul". Most Socratic inquiries consist of a series of elenchi and typically end in aporia.
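A rough sketch of the courage example in set terms (the toy domain and the encoding below are invented for illustration and are not drawn from Plato's text): given the two agreed premises, no reading of "courage" can also satisfy the thesis, which is the contradiction Socrates exposes in step 3.

```python
from itertools import chain, combinations

# Toy domain of soul-states, for illustration only.
domain = {"wise endurance", "ignorant endurance", "rashness"}
endurance_of_soul = {"wise endurance", "ignorant endurance"}
fine_things = {"wise endurance"}  # encodes "ignorant endurance is not a fine thing"

def subsets(s):
    s = list(s)
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

# Look for any reading of "courage" that satisfies the thesis and both premises.
consistent = [
    set(c) for c in subsets(domain)
    if set(c) == endurance_of_soul    # thesis: courage is endurance of the soul
    and set(c) <= fine_things         # premise: courage is a fine thing
]
print(consistent)  # [] -- no such reading exists, so the thesis is refuted
```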
Frede insists that step #4 above makes nonsense of the aporetic nature of the early dialogues. If any claim has been shown to be true then it cannot be the case that the interlocutors are in aporia, a state where they no longer know what to say about the subject under discussion.
The exact nature of the elenchus is subject to a great deal of debate, in particular concerning whether it is a positive method, leading to knowledge, or a negative method used solely to refute false claims to knowledge.
W. K. C. Guthrie in The Greek Philosophers sees it as an error to regard the Socratic method as a means by which one seeks the answer to a problem, or knowledge. Guthrie claims that the Socratic method actually aims to demonstrate one's ignorance. Socrates, unlike the Sophists, did believe that knowledge was possible, but believed that the first step to knowledge was recognition of one's ignorance. Guthrie writes, "[Socrates] was accustomed to say that he did not himself know anything, and that the only way in which he was wiser than other men was that he was conscious of his own ignorance, while they were not. The essence of the Socratic method is to convince the interlocutor that whereas he thought he knew something, in fact he does not."
Socrates generally applied his method of examination to concepts that seem to lack any concrete definition; e.g., the key moral concepts at the time, the virtues of piety, wisdom, temperance, courage, and justice. Such an examination challenged the implicit moral beliefs of the interlocutors, bringing out inadequacies and inconsistencies in their beliefs, and usually resulting in puzzlement known as aporia. In view of such inadequacies, Socrates himself professed his ignorance, but others still claimed to have knowledge. Socrates believed that his awareness of his ignorance made him wiser than those who, though ignorant, still claimed knowledge. While this belief seems paradoxical at first glance, it in fact allowed Socrates to discover his own errors where others might assume they were correct. This claim was known by the anecdote of the Delphic oracular pronouncement that Socrates was the wisest of all men. (Or, rather, that no man was wiser than Socrates.)
Socrates used this claim of wisdom as the basis of his moral exhortation. Accordingly, he claimed that the chief goodness consists in the caring of the soul concerned with moral truth and moral understanding, that "wealth does not bring goodness, but goodness brings wealth and every other blessing, both to the individual and to the state", and that "life without examination [dialogue] is not worth living". It is with this in mind that the Socratic method is employed.
The motive for the modern usage of this method and Socrates' use are not necessarily equivalent. Socrates rarely used the method to actually develop consistent theories, instead using myth to explain them. The Parmenides shows Parmenides using the Socratic method to point out the flaws in the Platonic theory of the Forms, as presented by Socrates; it is not the only dialogue in which theories normally expounded by Plato/Socrates are broken down through dialectic. Instead of arriving at answers, the method was used to break down the theories we hold, to go "beyond" the axioms and postulates we take for granted. Therefore, myth and the Socratic method are not meant by Plato to be incompatible; they have different purposes, and are often described as the "left hand" and "right hand" paths to the good and wisdom.
A Socratic Circle (also known as a Socratic Seminar) is a pedagogical approach based on the Socratic method and uses a dialogic approach to understand information in a text. Its systematic procedure is used to examine a text through questions and answers founded on the beliefs that all new knowledge is connected to prior knowledge, that all thinking comes from asking questions, and that asking one question should lead to asking further questions. A Socratic Circle is not a debate. The goal of this activity is to have participants work together to construct meaning and arrive at an answer, not for one student or one group to "win the argument".
This approach is based on the belief that participants seek and gain deeper understanding of concepts in the text through thoughtful dialogue rather than memorizing information that has been provided for them. While Socratic Circles can differ in structure, and even in name, they typically involve the following components: a passage of text that students must read beforehand and two concentric circles of students: an outer circle and an inner circle. The inner circle focuses on exploring and analysing the text through the act of questioning and answering. During this phase, the outer circle remains silent. Students in the outer circle are much like scientific observers watching and listening to the conversation of the inner circle. When the text has been fully discussed and the inner circle is finished talking, the outer circle provides feedback on the dialogue that took place. This process alternates with the inner circle students going to the outer circle for the next meeting and vice versa. The length of this process varies depending on the text used for the discussion. The teacher may decide to alternate groups within one meeting, or they may alternate at each separate meeting.
The most significant difference between this activity and most typical classroom activities involves the role of the teacher. In Socratic Circles the students lead the discussion and questioning. The teacher's role is to ensure the discussion advances regardless of the particular direction the discussion takes.
Various approaches to Socratic Circles
Teachers use Socratic Circles in different ways. The structure it takes may look different in each classroom. While this is not an exhaustive list, teachers may use one of the following structures to administer Socratic Seminar:
- Inner/Outer Circle or Fishbowl: Students need to be arranged in inner and outer circles. The inner circle engages in discussion about the text. The outer circle observes the inner circle, while taking notes. The outer circle shares their observations and questions the inner circle with guidance from the teacher/facilitator. Students use constructive criticism as opposed to making judgements. The students on the outside keep track of topics they would like to discuss as part of the debrief. Participants of the outer circle can use an observation checklist or notes form to monitor the participants in the inner circle. These tools will provide structure for listening and give the outside members specific details to discuss later in the seminar. The teacher may also sit in the circle but at the same height as the students.
- Triad: Students are arranged so that each participant in the inner circle (called a “pilot”) has two “co-pilots” sitting behind him/her on either side. Pilots are the speakers because they are in the inner circle; co-pilots are in the outer circle and only speak during consultation. The seminar proceeds as any other seminar. At a point in the seminar, the facilitator pauses the discussion and instructs the triad to talk to each other. Conversation will be about topics that need more in-depth discussion or a question posed by the leader. Sometimes triads will be asked by the facilitator to come up with a new question. Any time during a triad conversation, group members can switch seats and one of the co-pilots can sit in the pilot’s seat. Only during that time is the switching of seats allowed. This structure allows for students to speak, who may not yet have the confidence to speak in the large group. This type of seminar involves all students instead of just the students in the inner and outer circles.
- Simultaneous Seminars: Students are arranged in multiple small groups and placed as far as possible from each other. Following the guidelines of the Socratic Seminar, students engage in small group discussions. Simultaneous seminars are typically done with experienced students who need little guidance and can engage in a discussion without assistance from a teacher/facilitator. According to the literature, this type of seminar is beneficial for teachers who want students to explore a variety of texts around a main issue or topic. Each small group may have a different text to read/view and discuss. A larger Socratic Seminar can then occur as a discussion about how each text corresponds with one another. Simultaneous Seminars can also be used for a particularly difficult text. Students can work through different issues and key passages from the text.
No matter what structure the teacher employs, the basic premise of the seminar/circles is to turn partial control and direction of the classroom over to the students. The seminars encourage students to work together, creating meaning from the text and to stay away from trying to find a correct interpretation. The emphasis is on critical and creative thinking.
Socratic Circle texts
A Socratic Circle text is a tangible (part or whole) document that creates a thought-provoking discussion.
These texts ought to be appropriate for the participants' current level of intellectual and social development. The text provides the anchor for dialogue whereby the facilitator can bring the participants back to the text if they begin to digress. Furthermore, the seminar text enables the participants to create a level playing field – ensuring that the dialogical tone within the classroom remains consistent and pure to the subject or topic at hand. Some practitioners argue that "texts" do not have to be confined to printed texts, but can include artifacts such as objects, physical spaces, and the like.
Pertinent elements of an effective Socratic text
Effective Socratic seminar texts are able to challenge participants’ thinking skills if the texts are selected for the participants based on a number of characteristics:
- Ideas and values
- Complexity and challenge
- Relevance to participants and curriculum
- Ambiguity
1. Ideas and values - According to proponents of Socratic Circles, an effective text contains ideas and values that are complex and difficult to summarize. Powerful discussions are claimed to arise from personal connections to an abstract idea (thought, mental image or notion) or from a personal value (an idea that is desirable or worthy for its own sake).
2. Complexity and challenge - Within the literature on Socratic Circles pedagogy, effective texts are to be rich in ideas, complexity and open to interpretation. Ideally, it is argued, these texts should require multiple readings. At the same time, proponents also argue that texts should not be well above the participants' intellectual level or too long to read.
3. Relevance to participants and curriculum - An effective text has identifiable themes or issues that are recognizable and pertinent to the participants and their lives. The significant themes, ideas and values in the text should always align with the intended curriculum.
4. Ambiguity - The literature states that good texts provoke critical thinking and raise important questions if the Socratic text is able to be legitimately considered and discussed from a variety of different perspectives, including perspectives that seem mutually exclusive. This promotes a lot of discussions, explanations and individual contributions since there is no right or wrong answer.
Two different ways to select a text
According to the literature, a good seminar consists of a strong effective text. Along with the four elements listed above, a good Socratic text is not limited to written documents. Instead, there are a variety of Socratic texts to choose from, which can be divided into two main categories: (1) Print texts (e.g. short stories, poems, and essays) and non-print texts (e.g. photographs, sculptures, and maps) or (2) Subject area, which can draw from print or non-print artifacts. For example: Language Arts (e.g. poems), History (written or oral historical speeches), Science (e.g. policies on environmental issues), Math (e.g. Mathematical proofs), Health (e.g. nutrition labels), and Physical Education (e.g. code of ethics).
Questioning methods in Socratic Circles
Socratic Circles are based upon student involvement and participation through the interaction of peers. The focus is to gain multiple perspectives on a given issue or topic. This enables students to understand different points of view and to thoroughly explore the topic. Socratic questioning is used to generate and help maintain the flow of the activity and to help students connect the activity to their learning. The questions used in Socratic questioning are open-ended, focusing on broad, general ideas rather than specific, factual information. The questioning technique generally emphasizes a higher level of questioning and thinking, with no single right answer, which encourages discussion within the Socratic circle.
Socratic circles generally start with an open-ended question posed either by the leader or by a participant to encourage critical thinking and to develop skills. There is no designated first speaker; however, as individuals participate in Socratic circles, they gain more experience. There are two distinctive roles for this method: the leader and the participants.
The leader keeps the topic focused by asking a variety of questions about the text, follow-up questions, and questions that help clarify positions when arguments become confused; the leader also involves reluctant participants while restraining others who are already actively involved. Their questions prompt and encourage participants to elaborate on their responses and to build on what others state. The questions help probe participants to deepen, clarify, paraphrase and synthesize a variety of different views.
The participants share the responsibility with the leader to maintain the quality of the Socratic circle. They are to listen actively to enable themselves to share their ideas and questions in response to those presented by others. This enables the participants to gain experience in thinking and speaking persuasively using the discussion to support their position. During the questioning and responding, participants are to demonstrate respect for different ideas, thoughts and values with no interruptions. Individuals are required to build on what was previously mentioned.
Questions can be created individually or in small groups. All participants must be given the opportunity to take part in answering the questions. This strategy will include roles for everyone, such as probing and expanding. These questions will help students predict the content of the material that is to be explored. There are three types of questions to prepare for Socratic Circles: opening, guiding and closing questions.
- Opening questions – used to generate further discussion at the beginning of the seminar which elicits the main idea.
- Guiding questions – to help deepen and elaborate the discussion and responses. Participants are to stay on track and encourage a positive atmosphere and consideration for others.
- Closing questions – used to bring the seminar to a close. Participants are to summarize their thoughts and learning and personalize what they’ve discussed, making reference to their personal life.
The Socratic method is widely used in contemporary legal education by most law schools in the United States. In a typical class setting, the professor asks a question and calls on a student who may or may not have volunteered an answer. The professor either then continues to ask the student questions or moves on to another student.
The employment of the Socratic method has some uniform features but can also be heavily influenced by the temperament of the teacher. The method begins by calling on a student at random, and asking about a central argument put forth by one of the judges (typically on the side of the majority) in an assigned case. The first step is to ask the student to paraphrase the argument to ensure they read and basically understand the case. (Students who have not read the case, for whatever reason, must take the opportunity to "pass," which most professors allow as a matter of course a few times per term.) Assuming the student has read the case and can articulate the court's argument, the professor then asks whether the student agrees with the argument. The professor then typically plays Devil's advocate, trying to force the student to defend his or her position by rebutting arguments against it.
These subsequent questions can take several forms. Sometimes they seek to challenge the assumptions upon which the student based the previous answer until it can no longer be defended. Further questions can be designed to move a student toward greater specificity, either in understanding a rule of law or a particular case. The teacher may attempt to propose a hypothetical situation in which the student's assertion would seem to demand an exception. Finally professors can use the Socratic method to allow students to come to legal principles on their own through carefully worded questions that encourage a particular train of thought.
One hallmark of Socratic questioning is that typically there is more than one "correct" answer, and more often, no clear answer at all. The primary goal of the Socratic method in the law school setting is not to answer usually unanswerable questions, but to explore the contours of often difficult legal issues and to teach students the critical thinking skills they will need as lawyers. This is often done by altering the facts of a particular case to tease out how the result might be different. This method encourages students to go beyond memorizing the facts of a case and instead to focus on application of legal rules to tangible fact patterns. As the assigned texts are typically case law, the Socratic method, if properly used, can display that judges' decisions are usually conscientiously made but are based on certain premises, beliefs, and conclusions that are the subject of legitimate argument.
Sometimes, the class ends with a discussion of doctrinal foundations (legal rules) to anchor the students in contemporary legal understanding of an issue. At other times the class ends without such discussion leaving students to figure out for themselves the legal rules or principles that were at issue. For this method to work, the students are expected to be prepared for class in advance by reading the assigned materials (case opinions, notes, law review articles, etc.) and by familiarizing themselves with the general outlines of the subject matter.
Several excellent examples of the Socratic Method are portrayed in the 1973 film The Paper Chase, based on a 1970 novel by John J. Osborn, Jr., also titled The Paper Chase. Several scenes involve the interaction of members of Professor Kingsfield's first year Contracts Law course and clearly show how the Socratic method is used as a framework for presenting concepts in contract law to the students.
The Socratic method, in the form of Socratic questioning, has been adapted for psychotherapy, most prominently in Classical Adlerian Psychotherapy, Cognitive Therapy and Reality Therapy. It can be used to clarify meaning, feeling, and consequences, as well as to gradually unfold insight, or explore alternative actions.
Human resource training and development
The method is used by modern management training companies facilitating skills, knowledge and attitudinal change; e.g. Acta Non Verba, Krauthammer, Gustav Käser Training International, Odyssey Ltd, Dynargie, Wendell Nekoranec.
The principal trainer acts as a facilitator who uses a high percentage of open questions to allow the participants to reflect critically on their own way of thinking, feeling, or behaving in a given context, usually one involving a problem or desired outcome. The facilitator guides participants to form the conclusion or an axiom/principle/belief through their own efforts, potentially highlighting dissonance and conflicts between thought and action with questions for further discussion.
The generalized form may then be elaborated with more specific detail through an example, e.g. a case study led by the trainer.
Lesson plan elements for teachers in classrooms
This is a classical method of teaching that was designed to create autonomous thinkers.
- Plan and build the main course of thought through the lessons.
- Build in potential fallacies (errors) for discovery and discussion.
- Know common fallacies.
- It may help to start or check with the conclusion and work backwards.
Methodology in operation
- The teacher and student agree on the topic of instruction.
- The student agrees to attempt to answer questions from the teacher.
- The teacher and student are willing to accept any correctly-reasoned answer. That is, the reasoning process must be considered more important than pre-conceived facts or beliefs.
- The teacher's questions should expose errors in the students' reasoning or beliefs, then formulate questions that the students cannot answer except by a correct reasoning process. The teacher has prior knowledge about the classical fallacies (errors) in reasoning.
- Where the teacher makes an error of logic or fact, it is acceptable for a student to draw attention to the error.
An informal discussion or similar vehicle of communication may not strictly be a (Socratic) dialogue. Therefore it is only suitable as a medium for the Socratic method where the principles are known by teachers and likely to be known by students. Additionally, the teacher must be knowledgeable and proficient enough to spontaneously ask questions in order to draw conclusions, principles, etc. from the students.
A number of American universities and boarding schools employ a system known as the Harkness method, a style of teaching directly derived from the Socratic method. Developed in the early 1930s at Phillips Exeter Academy under the patronage of philanthropist Edward Harkness, the system calls for an oval table around which approximately thirteen students and an instructor sit. The instructor does not directly lecture the class but rather poses thought-provoking questions or topics, which the group then discusses. Meant to encourage shy students to voice their opinions as well as to foster the development of critical thinking skills, Harkness is widespread at many American schools.
- Active learning
- Institutional memory
- Marva Collins
- Rote learning
- Socrates Cafe
- W. Clement Stone
- Jarratt, Susan C. Rereading the Sophists: Classical Rhetoric Refigured. Carbondale and Edwardsville: Southern Illinois University Press, 1991., p 83.
- Sprague, Rosamond Kent, The Older Sophists, Hackett Publishing Company (ISBN 0-87220-556-8), p. 5.
- Liddell, Scott and Jones, Greek-English Lexicon, 9th Edition.
- Webster's New World College Dictionary, 4th Edition; Oxford English Dictionary.
- Gregory Vlastos, 'The Socratic Elenchus', Oxford Studies in Ancient Philosophy I, Oxford 1983, 27–58.
- Michael Frede, "Plato's Arguments and the Dialogue Form", Oxford Studies in Ancient Philosophy, Supplementary Volume 1992, Oxford 1992, 201–19.
- Copeland, Matt (2010). Socratic Circles: Fostering Critical and Creative Thinking in Middle and High School. Portland, MN: Stenhouse.
- "The Socratic Circle". Retrieved 17 July 2012.
- "Furman: Socratic Seminar". Retrieved July 2012.
- Ting Chowning, Jeanne (October 2009). "Socratic Seminars in Science Class". The Science Teacher (National Science Teachers Association) 76 (7): 38.
- Gose, Michael (January 2009). "When Socratic Dialogue is Flagging: Questions and Strategies for Engaging Students". College Teaching 57 (1): 46.
- "The Paideia Seminar: active thinking through dialogue centre. 3.4 Planning step 3: Select text". Retrieved July 16, 2012.
- Chorzempa, Barbara; Lapidus, Laurie (January 2009). "To Find Yourself, Think For Yourself". Teaching Exceptional Children 41 (3): 54–59.
- Mangrum, Jennifer (April 2010). "Sharing Practice Through Socratic Seminars". Kappan 91 (7): 40–43.
- "Facing History and Ourselves: Socratic Seminar". Retrieved July 16, 2012.
- Overholser, J. C. (1993). "Elements of the Socratic method: II. Inductive reasoning". Psychotherapy 30: 75–85. doi:10.1037/0033-322.214.171.124.
- Overholser, J. C. (1994). "Elements of the Socratic method: III. Universal definitions". Psychotherapy 31 (2): 286–293. doi:10.1037/h0090222.
- Overholser, J. C. (1995). "Elements of the Socratic method: IV. Disavowal of knowledge". Psychotherapy 32 (2): 283–292. doi:10.1037/0033-3126.96.36.1993.
- Overholser, J. C. (1996). "Elements of the Socratic method: V. Self-improvement". Psychotherapy 33: 283–292.
- PE Areeda, 'The Socratic Method' (1996) 109(5) Harvard Law Review 911-922
- Benson, Hugh (2000) Socratic Wisdom. Oxford: Oxford University Press.
- Frede, Michael (1992) 'Plato's Arguments and the Dialogue Form' in Oxford Studies in Ancient Philosophy, Supplementary Volume, 201-19.
- Guthrie, W. K. C. (1968) The Greek Philosophers from Thales to Aristotle. London: Routledge.
- Jarratt, Susan C. (1991) Rereading the Sophists: Classical Rhetoric Refigured. Carbondale and Edwardsville: Southern Illinois University Press.
- Sprague, Rosamond Kent (1972) The Older Sophists. Indianapolis: Hackett Publishing Company ISBN 0-87220-556-8.
- Gregory; Vlastos (1983). "The Socratic Elenchus". Oxford Studies in Ancient Philosophy 1: 27–58.
- Robinson, Richard, Plato's Earlier Dialectic, 2nd edition (Clarendon Press, Oxford, 1953).
- Philosopher.org - 'Tips on Starting your own Socrates Cafe', Christopher Phillips, Cecilia Phillips
- Socraticmethod.net Socratic Method Research Portal
- UChicago.edu - 'The Socratic Method', Elizabeth Garrett (1998)
- Teaching by Asking Instead of by Telling, an example from Rick Garlikov
- Project Gutenberg: Works by Plato
- Project Gutenberg: Works by Xenophon (includes some Socratic works)
- Project Gutenberg: Works by Cicero (includes some works in the "Socratic dialogue" format)
- The Socratic Club
- Socratic and Scientific Method | http://en.wikipedia.org/wiki/Socratic_method | 13 |
31 |
In philosophy, term logic, also known as traditional logic or Aristotelian logic, is a loose name for the way of doing logic that began with Aristotle and that was dominant until the advent of modern predicate logic in the late nineteenth century. This entry is an introduction to the term logic needed to understand philosophy texts written before predicate logic came to be seen as the only formal logic of interest. Readers lacking a grasp of the basic terminology and ideas of term logic can have difficulty understanding such texts, because their authors typically assumed an acquaintance with term logic.
Aristotle's system
Aristotle's logical work is collected in the six texts that are collectively known as the Organon. Two of these texts in particular, namely the Prior Analytics and De Interpretatione contain the heart of Aristotle's treatment of judgements and formal inference, and it is principally this part of Aristotle's works that is about term logic.
The basics
The fundamental assumption behind the theory is that propositions are composed of two terms – hence the name "two-term theory" or "term logic" – and that the reasoning process is in turn built from propositions:
- The term is a part of speech representing something, but which is not true or false in its own right, such as "man" or "mortal".
- The proposition consists of two terms, in which one term (the "predicate") is "affirmed" or "denied" of the other (the "subject"), and which is capable of truth or falsity.
- The syllogism is an inference in which one proposition (the "conclusion") follows of necessity from two others (the "premises").
A proposition may be universal or particular, and it may be affirmative or negative. Traditionally, the four kinds of propositions are:
- A-type: Universal and affirmative ("Every philosopher is mortal")
- I-type: Particular and affirmative ("Some philosopher is mortal")
- E-type: Universal and negative ("Every philosopher is immortal")
- O-type: Particular and negative ("Some philosopher is immortal")
This was called the fourfold scheme of propositions (see types of syllogism for an explanation of the letters A, I, E, and O in the traditional square). Aristotle's original square of opposition, however, does not lack existential import:
- A-type: Universal and affirmative ("Every philosopher is mortal")
- I-type: Particular and affirmative ("Some philosopher is mortal")
- E-type: Universal and negative ("No philosopher is mortal")
- O-type: Particular and negative ("Not every philosopher is mortal")
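As a sketch (the set-based reading below is one common modern gloss, with the existential-import choices noted in the comments; it is not Aristotle's own formulation), the four forms can be rendered as predicates over the extensions of the two terms:

```python
philosophers = {"Socrates", "Plato"}
mortals = {"Socrates", "Plato", "Callias"}

def form_A(S, P):  # A: "Every S is P", read here with existential import
    return bool(S) and S <= P

def form_I(S, P):  # I: "Some S is P"
    return bool(S & P)

def form_E(S, P):  # E: "No S is P"
    return not (S & P)

def form_O(S, P):  # O: "Not every S is P", read here without existential import
    return not form_A(S, P)

print(form_A(philosophers, mortals))  # True:  every philosopher is mortal
print(form_I(philosophers, mortals))  # True:  some philosopher is mortal
print(form_E(philosophers, mortals))  # False: "no philosopher is mortal"
print(form_O(philosophers, mortals))  # False: "not every philosopher is mortal"
```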
In the Stanford Encyclopedia of Philosophy article, "The Traditional Square of Opposition", Terence Parsons explains:
One central concern of the Aristotelian tradition in logic is the theory of the categorical syllogism. This is the theory of two-premised arguments in which the premises and conclusion share three terms among them, with each proposition containing two of them. It is distinctive of this enterprise that everybody agrees on which syllogisms are valid. The theory of the syllogism partly constrains the interpretation of the forms. For example, it determines that the A form has existential import, at least if the I form does. For one of the valid patterns (Darapti) is:
- Every C is B
- Every C is A
- So, some A is B
This is invalid if the A form lacks existential import, and valid if it has existential import. It is held to be valid, and so we know how the A form is to be interpreted. One then naturally asks about the O form; what do the syllogisms tell us about it? The answer is that they tell us nothing. This is because Aristotle did not discuss weakened forms of syllogisms, in which one concludes a particular proposition when one could already conclude the corresponding universal. For example, he does not mention the form:
- No C is B
- Every A is C
- So, some A is not B
If people had thoughtfully taken sides for or against the validity of this form, that would clearly be relevant to the understanding of the O form. But the weakened forms were typically ignored...
One other piece of subject-matter bears on the interpretation of the O form. People were interested in Aristotle's discussion of “infinite” negation, which is the use of negation to form a term from a term instead of a proposition from a proposition. In modern English we use “non” for this; we make “non-horse,” which is true of exactly those things that are not horses. In medieval Latin “non” and “not” are the same word, and so the distinction required special discussion. It became common to use infinite negation, and logicians pondered its logic. Some writers in the twelfth and thirteenth centuries adopted a principle called “conversion by contraposition.” It states that
- ‘Every S is P’ is equivalent to ‘Every non-P is non-S’
- ‘Some S is not P’ is equivalent to ‘Some non-P is not non-S’
Unfortunately, this principle (which is not endorsed by Aristotle) conflicts with the idea that there may be empty or universal terms. For in the universal case it leads directly from the truth:
- Every man is a being
to the falsehood:
- Every non-being is a non-man
(which is false because the universal affirmative has existential import, and there are no non-beings). And in the particular case it leads from the truth (remember that the O form has no existential import):
- A chimera is not a man
to the falsehood:
- A non-man is not a non-chimera
These are [Jean] Buridan's examples, used in the fourteenth century to show the invalidity of contraposition. Unfortunately, by Buridan's time the principle of contraposition had been advocated by a number of authors. The doctrine is already present in several twelfth century tracts, and it is endorsed in the thirteenth century by Peter of Spain, whose work was republished for centuries, by William Sherwood, and by Roger Bacon. By the fourteenth century, problems associated with contraposition seem to be well-known, and authors generally cite the principle and note that it is not valid, but that it becomes valid with an additional assumption of existence of things falling under the subject term. For example, Paul of Venice in his eclectic and widely published Logica Parva from the end of the fourteenth century gives the traditional square with simple conversion but rejects conversion by contraposition, essentially for Buridan's reason.
—Terence Parsons, The Stanford Encyclopedia of Philosophy
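Both points in the passage above can be checked under the same set-based reading. This is a sketch under the stated existential-import assumptions, not a claim about the medieval texts; the toy sets are invented for illustration:

```python
def every(S, P, existential_import=True):   # A form
    return (bool(S) if existential_import else True) and S <= P

def some(S, P):                              # I form
    return bool(S & P)

# Darapti: Every C is B, Every C is A, therefore Some A is B.
# Valid when the A form carries existential import:
C, B, A = {"c"}, {"c", "b"}, {"c", "a"}
assert every(C, B) and every(C, A) and some(A, B)

# Invalid if the A form may be vacuously true of an empty C:
C, B, A = set(), {"b"}, {"a"}
print(every(C, B, existential_import=False),   # True (vacuously)
      every(C, A, existential_import=False),   # True (vacuously)
      some(A, B))                              # False: the conclusion fails

# Contraposition: "Every man is a being" vs "Every non-being is a non-man".
universe = {"Socrates", "Callias", "a stone"}
man, being = {"Socrates", "Callias"}, set(universe)    # everything in the toy universe "is"
non_man, non_being = universe - man, universe - being  # non_being is empty

print(every(man, being))          # True
print(every(non_being, non_man))  # False: with existential import there are no non-beings
```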
The term
A term (Greek horos) is the basic component of the proposition. The original meaning of the horos (and also of the Latin terminus) is "extreme" or "boundary". The two terms lie on the outside of the proposition, joined by the act of affirmation or denial. For early modern logicians like Arnauld (whose Port-Royal Logic was the best-known text of his day), it is a psychological entity like an "idea" or "concept". Mill considers it a word. To assert "all Greeks are men" is not to say that the concept of Greeks is the concept of men, or that word "Greeks" is the word "men". A proposition cannot be built from real things or ideas, but it is not just meaningless words either.
The proposition
In term logic, a "proposition" is simply a form of language: a particular kind of sentence, in which the subject and predicate are combined, so as to assert something true or false. It is not a thought, or an abstract entity. The word "propositio" is from the Latin, meaning the first premise of a syllogism. Aristotle uses the word premise (protasis) as a sentence affirming or denying one thing of another (Posterior Analytics 1. 1 24a 16), so a premise is also a form of words. However, as in modern philosophical logic, it means that which is asserted by the sentence. Writers before Frege and Russell, such as Bradley, sometimes spoke of the "judgment" as something distinct from a sentence, but this is not quite the same. As a further confusion the word "sentence" derives from the Latin, meaning an opinion or judgment, and so is equivalent to "proposition". The logical quality of a proposition is whether it is affirmative (the predicate is affirmed of the subject) or negative (the predicate is denied of the subject). Thus every philosopher is mortal is affirmative, since the mortality of philosophers is affirmed universally, whereas no philosopher is mortal is negative by denying such mortality in particular. The quantity of a proposition is whether it is universal (the predicate is affirmed or denied of all subjects or of "the whole") or particular (the predicate is affirmed or denied of some subject or a "part" thereof). In case where existential import is assumed, quantification implies the existence of at least one subject, unless disclaimed.
Singular terms
For Aristotle, the distinction between singular and universal is a fundamental metaphysical one, and not merely grammatical. A singular term for Aristotle is primary substance, which can only be predicated of itself: (this) "Callias" or (this) "Socrates" are not predicable of any other thing, thus one does not say every Socrates one says every human (De Int. 7; Meta. Δ9, 1018a4). It may feature as a grammatical predicate, as in the sentence "the person coming this way is Callias". But it is still a logical subject.
He contrasts "universal" (katholou, "whole") secondary substance, genera, with primary substance, particular specimens. The formal nature of universals, in so far as they can be generalized "always, or for the most part", are the subject matter of both scientific study and formal logic.
The essential feature of the syllogistic is that, of the four terms in the two premises, one must occur twice. Thus
- All Greeks are men
- All men are mortal.
The subject of one premise must be the predicate of the other, and so it is necessary to eliminate from the logic any terms which cannot function both as subject and predicate, namely singular terms.
- All men are mortals
- All Socrates are men
- All Socrates are mortals
This is clearly awkward, a weakness exploited by Frege in his devastating attack on the system (from which, ultimately, it never recovered, see concept and object).
The famous syllogism "Socrates is a man ..." is frequently quoted as though from Aristotle, but in fact it is nowhere in the Organon. It is first mentioned by Sextus Empiricus in his Hyp. Pyrrh. ii. 164.
Decline of term logic
Term logic began to decline in Europe during the Renaissance, when logicians like Rodolphus Agricola Phrisius (1444–1485) and Ramus began to promote place logics. The logical tradition called Port-Royal Logic, or sometimes "traditional logic", claimed that a proposition was a combination of ideas rather than terms, but otherwise followed many of the conventions of term logic. It was influential, especially in England, until the 19th century. Leibniz created a distinctive logical calculus, but nearly all of his work on logic was unpublished and unremarked until Louis Couturat went through the Leibniz Nachlass around 1900, publishing his pioneering studies in logic.
19th century attempts to algebraize logic, such as the work of Boole and Venn, typically yielded systems highly influenced by the term logic tradition. The first predicate logic was that of Frege's landmark Begriffsschrift, little read before 1950, in part because of its eccentric notation. Modern predicate logic as we know it began in the 1880s with the writings of Charles Sanders Peirce, who influenced Peano and even more, Ernst Schröder. It reached fruition in the hands of Bertrand Russell and A. N. Whitehead, whose Principia Mathematica (1910–13) made splendid use of a variant of Peano's predicate logic.
Term logic also survived to some extent in traditional Roman Catholic education, especially in seminaries. Medieval Catholic theology, especially the writings of Thomas Aquinas, had a powerfully Aristotelean cast, and thus term logic became a part of Catholic theological reasoning. For example, Joyce (1949), written for use in Catholic seminaries, made no mention of Frege or Bertrand Russell.
A revival
Some philosophers have complained that predicate logic:
- Is unnatural in a sense, in that its syntax does not follow the syntax of the sentences that figure in our everyday reasoning. It is, as Quine acknowledged, "Procrustean," employing an artificial language of function and argument, quantifier, and bound variable.
- Suffers from theoretical problems, probably the most serious being empty names and identity statements.
Even academic philosophers entirely in the mainstream, such as Gareth Evans, have written as follows:
- "I come to semantic investigations with a preference for homophonic theories; theories which try to take serious account of the syntactic and semantic devices which actually exist in the language ...I would prefer [such] a theory ... over a theory which is only able to deal with [sentences of the form "all A's are B's"] by "discovering" hidden logical constants ... The objection would not be that such [Fregean] truth conditions are not correct, but that, in a sense which we would all dearly love to have more exactly explained, the syntactic shape of the sentence is treated as so much misleading surface structure" (Evans 1977)
See also
- Parsons, Terence (2012). "The Traditional Square of Opposition". In Edward N. Zalta. The Stanford Encyclopedia of Philosophy (Fall 2012 ed.). 3-4.
- They are mentioned briefly in the De Interpretatione. Afterwards, in the chapters of the Prior Analytics where Aristotle methodically sets out his theory of the syllogism, they are entirely ignored.
- Arnauld, Antoine and Nicole, Pierre; (1662) La logique, ou l'art de penser. Part 2, chapter 3
- For example: Kapp, Greek Foundations of Traditional Logic, New York 1942, p. 17, Copleston A History of Philosophy Vol. I., p. 277, Russell, A History of Western Philosophy London 1946 p. 218.
- Copleston's A History of Philosophy
- Bocheński, I. M., 1951. Ancient Formal Logic. North-Holland.
- Louis Couturat, 1961 (1901). La Logique de Leibniz. Hildesheim: Georg Olms Verlagsbuchhandlung.
- Gareth Evans, 1977, "Pronouns, Quantifiers and Relative Clauses," Canadian Journal of Philosophy.
- Peter Geach, 1976. Reason and Argument. University of California Press.
- Hammond and Scullard, 1992. The Oxford Classical Dictionary. Oxford University Press, ISBN 0-19-869117-3.
- Joyce, George Hayward, 1949 (1908). Principles of Logic, 3rd ed. Longmans. A manual written for use in Catholic seminaries. Authoritative on traditional logic, with many references to medieval and ancient sources. Contains no hint of modern formal logic. The author lived 1864-1943.
- Jan Łukasiewicz, 1951. Aristotle's Syllogistic, from the Standpoint of Modern Formal Logic. Oxford Univ. Press.
- John Stuart Mill, 1904. A System of Logic, 8th ed. London.
- Parry and Hacker, 1991. Aristotelian Logic. State University of New York Press.
- Arthur Prior, 1962. Formal Logic, 2nd ed. Oxford Univ. Press. While primarily devoted to modern formal logic, contains much on term and medieval logic.
- --------, 1976. The Doctrine of Propositions and Terms. Peter Geach and A. J. P. Kenny, eds. London: Duckworth.
- Willard Quine, 1986. Philosophy of Logic 2nd ed. Harvard Univ. Press.
- Rose, Lynn E., 1968. Aristotle's Syllogistic. Springfield: Clarence C. Thomas.
- Sommers, Fred, 1970, "The Calculus of Terms," Mind 79: 1-39. Reprinted in Englebretsen, G., ed., 1987. The new syllogistic New York: Peter Lang. ISBN 0-8204-0448-9
- --------, 1982. The logic of natural language. Oxford University Press.
- --------, 1990, "Predication in the Logic of Terms," Notre Dame Journal of Formal Logic 31: 106-26.
- -------- and Englebretsen, George, 2000. An invitation to formal reasoning. The logic of terms. Aldershot UK: Ashgate. ISBN 0-7546-1366-6.
- Szabolcsi Lorne, 2008. Numerical Term Logic. Lewiston: Edwin Mellen Press.
- Term logic at PhilPapers
- Aristotle's Logic entry by Robin Smith in the Stanford Encyclopedia of Philosophy
- Term logic entry in the Internet Encyclopedia of Philosophy
- Aristotle's term logic online—This online program provides a platform for experimentation and research on Aristotelian logic.
- Annotated bibliographies of writings by:
- PlanetMath: Aristotelian Logic.
- Interactive Syllogistic Machine for Term Logic A web based syllogistic machine for exploring fallacies, figures, terms, and modes of syllogisms. | http://en.wikipedia.org/wiki/Aristotlean_logic | 13 |
22 | |Click on the title to view more information about a particular curriculum.
|4-H Sportfishing Aquatic Resources Education Program (SAREP)
||These activities are designed to help "hook" kids with a broader
message about aquatic resources and the need to respect and
conserve them. They were intended to be used as the basis for 4-H
club meetings and activities. Activities published individually in
20 separate booklets include almost everything about fishing from
"how to fish" in a variety of settings to "minimizing your intake
of fish contaminants." Note explicit commitment to and focus upon
affective learning. Binder contains all supplemental materials
listed in Activity Booklets. Introductory chapters include
|4-H Wetland Wonders
||This interdisciplinary curriculum is designed for grades 4 and 5, and
focuses on ecosystems in Oregon. It consists of eight units, in sequential
order, to develop and reinforce water quality concepts through a
combination of field, laboratory and classroom activities. Pre- and
post-surveys help the educator evaluate student comprehension. The
curriculum is accompanied by a resource trunk available through the 4-H
||This GEMS Teacher's Guide presents detailed lesson plans for 8 class sessions that engage students in a variety of activities that lead to a broad understanding of acid rain. Science concepts of pH, effects of acids on various materials and systems; skills of observation, measurement, data collection, drawing conclusions and synthesis of information are developed. One particular strength of this unit is the empowerment it affords students as they combine personal and social perspectives working to explore alternative solutions. The range of activities--lab experimentation, discussions, reading and writing, simulations and role play, and a game--makes the unit applicable to all learning styles. Excellent teacher preparation and background material provided.
|Acid Rain: A Student's First Sourcebook
||This sourcebook offers students information about acidity and the pH scale; the role of air pollution in acid precipitation and dry deposition; and the effects of acid rain on forests, aquatic habitats, man-made materials and people. It outlines potential solutions, including continued research, alternative energy sources, restoration and conservation. Nine experiments are included which measure pH in a variety of substances and simulate acidic conditions to assess the impact on variety substances.
This document has been combined with other materials, updated, expanded and reformatted to cover a broader range of topics. New version is available online at http://www.epa.gov/airmarkets/acidrain/index.html
|Active Watershed Education Curriculum Guide, It's AWEsome! (formerly The Pawcatuck Watershed Curriculum)
||This material replaces The Pawcatuck Watershed Curriculum. This guide takes a thematic approach to teaching about watersheds.
Authors address several components of watersheds, including wetland
ecology, soils, point and non-point source pollution, and cultural
and historical land uses. Text includes pre and post tests for
students. Curriculum is well-organized and provides thorough
background information for educators. Also includes an Appendix
that provides suggestions on how to adapt the program activities to
|Activity Guide for Teachers, An: Everglades National Park
||This unit-based, multi-resource guide provides 4-6th grade teachers
with the tools to teach about the varied Everglades ecosystem. The
curriculum addresses many of South Florida's water issues, e.g.,
human population growth, water diversion from the Everglades, water
quantity regulated to the Everglades, overharvesting of fish and
shrimp, and disruption of the estuarian food chain. The five
appendices include background information, supplemental classroom
materials, songs, vocabulary, bibliograghy, and resource lists.
||Written for grades 7-10, this curriculum places emphasis on land
use within a watershed and less on water quality monitoring;
activites encourage youth to apply observational skills when
monitoring a stream and rely less on quantitative results from test
equipment. Thorough background information for teachers and
students. Packet includes the curriculum notebook plus an angler
education program guide, aquatic plant guide, and macroinvertebrate
guide and poster.
|Adopt-a-Stream: Teacher's Handbook
||Adopt-a-Stream is a program that gives high school students the skills and information they need to "adopt" a waterway. Relying on community cosponsors, students employ field study with follow-up water quality testing and data analysis which culminates in a final presentation on the environmental health of the waterway to the public. The handbook gives detailed explanations of water quality indicators and the procedures for testing them.
|Adventures of Wally, the Water Molecule
||A resource to aid in teaching about the chemistry of water.
Materials are designed to provide active learning opportunities for
grades K - 3. An accompanying video assists instructors in learning
to use active learning strategies. Some concepts and vocabulary
contained in the learning activities may be too abstract for young
children (e.g., volume, mass and density).
|Alabama Water Quality Curriculum
||This online resource, specific to the state of Alabama, has sections for upper elementary, middle- and high-school study. Units include background information, student fact sheets, worksheets, and activities, The material includes 18 Alabama Cooperative Extension System Fact Sheets on specific environmental issues. It was developed for nonformal group study, such as 4-H clubs, or enrichment material for regular classroom use.
|All the Rivers Run
||Using a watershed approach, this curriculum guide is designed to create a holistic, theme-based on-site experience for a four-day residential program. The curriculum combines art, science, multiculturalism, global connection, and environmental responsibility in an artistically presented format.
|Always a River: Supplemental Environmental Ed Curriculum on the Ohio River & Water
||This curriculum includes four primary objectives: to demonstrate
that the Ohio River is part of a total ecosystem; to introduce the
science of water and its importance to living things; to explore
human use and environmental impacts of human activity; and to
examine the influence of the river on historical and modern
culture. The "Careers on the River" activity is unique_authors
suggest holding a "career day." Includes appendices on making
aquaria, guidelines for interviewing people, and field ethics.
||Aquatic Ecosystems is one of the middle school units of the K-12 Adopt-A-Watershed curriculum. This hands-on unit contains classroom study with extensive fieldwork to observe, define and monitor a wetland ecosystem. At the onset, the group develops class and individual rubrics for unit assessment. Ecosystem mapping guides students to identify components of an ecosystem as well as conceptualize relatedness among components through feedback loops. A class water quality improvement activity, either public education or a restoration project involves students with the community and its resources. Students graphically represent their observations in a variety of media throughout the unit. A Watershed Art Show featuring this visual snapshot of their study is the culmination of the unit.
|Aquatic Environment Education: School Enrichment
||Primarily a guide rather than a curriculum. These materials support a university
extension program. In addition to the curriculum guide, the program includes videos,
an aquarium stocked with fish, and 12 fact sheets to support a fish culture project.
The program strategy offers a unique opportunity to connect youth with actual experience
with a natural resource professional. Video content was not reviewed. Materials can be
used independently of the videos, but this will require teachers to develop their own activities.
|Aquatic Habitats: Exploring Desktop Ponds
||Aquatic Habitats uses an inquiry-based approach to guide 2nd to 6th grade students to discover and understand the concepts of habitat, food webs, life cycles, adaptation, decomposition, interdependence, animal structures and behavior, biological control and environmental characteristics. Each idea develops as small groups of students assemble a desktop pond and introduce new biological elements, observe and make predictions. A culminating field trip to a pond includes field activities that deepen students' understanding; in-class investigations are offered as an alternative to the field experience.
|Aquatic WILD
||This curriculum supplements Project WILD, an inter-disciplinary,
supplementary environmental and conservation education program
emphasizing wildlife. Activities in this guide emphasize water
habitats that support wildlife. Research data links use of Aquatic
Wild activities with learning outcomes. Instructors must complete a
training program in order to receive materials.
Each activity is summarized according to student age, subjects,
skills, duration, group size, setting, conceptual framework
reference, and key vocabulary. The Background section addresses the
main concepts needed to conduct the activity. Materials include
suggestions for aquatic extensions of existing Project Wild
instructional activities. Exceptional appendix materials,
including: extensions to existing Project WILD activities, use of the outdoors as a classroom, a conceptual framework as a basis for activities, and activities cross-referenced by grade, subject, skills & topic, activity length, and indoor/outdoor setting.
|Arid Lands, Sacred Waters
||Arid Lands, Sacred Waters is designed to augment the concepts presented in the New Mexico Museum of Natural History's exhibit on the importance of water; selected activities can also be used or adapted to other regions. Topics addressed include the hydrologic cycle; groundwater and surface water interactions; components and interrelationships within a riparian food web; importance of water quantity and quality to plants and animals; water treatment; household water consumption; influence of water availability on the location of different people in New Mexico from ancient times onward; and decision-making concerning water-related environmental and cultural issues. Arid Lands, Sacred Waters is one of the few curricula to address population growth as it relates to water consumption and the quantity of water available for human use now and in the future. Each activity includes an objective, time frame, grade level, materials list, directions, and brief background information. Additional resources are required in order to complete some of the activities. The curriculum is also available in Spanish.
|Be Water Wise
||The goals of this curriculum include: helping users understand
that water plays a critical role in our daily lives; understanding
why water should be used wisely; and making users more
conscientious about conserving water. Materials include a student
activity book for ages 12 and above in addition to the instructor's
guide. The resource was designed for flexibility either as a school
supplement or as a resource for other groups interested in water
|Captain Hydro and the Further Adventures of Captain Hydro
||Designed as a comic book for middle school students, Captain
Hydro covers the water cycle (natural and built), water use, and water
conservation and management. The Further Adventures of Captain Hydro,
for grades 8-10, concentrates on world history and geography. Field
experiences are provided as "homework". Two simulation exercises in
Captain Hydro help develop community problem solving skills.
|Caring for Our Lakes: A Curriculum on the Yahara Watershed
||A local resource that demonstrates how a curriculum can be designed
to further educational goals about a local water resource: lakes.
Yet it includes aspects that are applicable to any community with
small lakes in its watershed. Goals for students to achieve
include: understanding lakes as part of a larger ecosystem; ability
to identify problems and issues concerning the Yahara lakes;
familiarity with geography of the watershed; and recognition of
human activities related to lake problems.
|Caring for Planet Earth
||The six lessons in this packet focus on waste, water, air and energy. Developed at Oklahoma State University in conjunction with the EPA for the 4-H Youth Development Department, it serves as a school enrichment program.
|Child's Place in the Environment: Caring for Aquatic Systems, A
||A Child's Place in the Environment: Caring for Aquatic Ecosystems is an interdisciplinary, thematic curriculum requiring students to construct knowledge. Produced by the California Department of Education, the units are conceptually correlated to the Science Framework for California Public Schools, although materials are applicable in other areas of the country as well. 'Caring for Aquatic Ecosystems' is one module in the grade 1-6 series, A Child's Place in the Environment. The module is organized around four concepts: 1)water cycles through living and nonliving things; 2)water is essential to all living things; 3)the ways people acquire and use water affect living things; and 4)people can choose to conserve water, maintain or improve its quality, and protect specific bodies of water. Numerous literature selections -not included in the evaluation- illustrate and link together the core concepts. Working in groups, students investigate the purification of water through evaporation in the water cycle; observe capillary action in a plant; interpret California maps and identify natural and built water systems; determine ways water can be conserved; and critically analyze advertising brochures from environmental organizations.
|Cleaning Water
||Cleaning Water is part of the Foundations and Challenges to Encourage Technology-based Science (FACETS) program, which offers 8 modules for each of grades 6-8. Cleaning Water is a 3-week module divided into 6 activities for grade 7. In this module, students study the issue of clean drinking water from the standpoint of the home filter market. Activities include identification of contaminants in home water systems; investigation of the source(s) of contamination through topographic map analysis; water quality testing; and experimentation with contamination removal through the use of different filtering devices. The module concludes as students design, build, and test a home water filter; each group then creates a marketing plan for each other's filters.
|Clear Water, Streams & Fish:A Holistic View of Watersheds
||Both curricula are written to help elementary (grades 6-9) and secondary (grades 9-12) youth understand watersheds,
the effects of human activities within watersheds, and how to
minimize those effects. Week-long, interdisciplinary lesson plans focus on fish life cycles and
habitat, stream dynamics, natural and human activities. Youth are then exposed to various controversies and issues
that occur in the Pacific Northwest such as private and commercial fishing, Indian Treaty Rights, development and
logging. The "Solutions" unit suggests ways to address problems within the watershed.
|Coastal Georgia Adopt-a-Wetland Curr. Guide for Grades 3-12
||The twenty lessons in this curriculum cover attributes of watersheds, estuaries and wetlands, impacts of natural and human activity on them, orienteering and geocaching. Activities include data collection and classification. The unit concludes with a Role Play for Wetland Resources. Also included are Fact Sheets on Invasive Species, Native Species, and Habitats, Processes & Legislation.
|Coastal Issues: A Wave of Concern
||Activities written for high school students focus on decision-
making skills as they relate to coastal development, recreation,
tourism, and aesthetic concerns. Case studies represent real
coastal community issues.
|Comprehensive Water Education Book (formerly Water Education), The
||Activities for school settings seek to develop water literacy
through active learning. Activities stress comprehension of water
concepts, development of attitudes about water issues, and skills
to solve water issue problems. Concepts/vocabulary may be difficult
for K-6 graders (e.g., porosity, saturation, volume, density).
|Connections to the Sea, A 4-H Guide to Marine Education
||Materials focus on ocean ecology, hydrology, and pollution sources
through student field investigations. Unique activities cover
mapping and map reading, and environmental sensitivity. An
extensive "related activities" section includes activities for the
visual arts, sea food, impact of the ocean on people's lives,
environmental issues, and plant collections. Also includes a small
field guide to Maine Atlantic organisms in the booklet. Materials
do not specify an age, but appear to be designed for middle school
through high school youth.
|Cool Classroom, The
||The Cool Classroom is a series of Internet-based instructional modules that link middle and high school classrooms with active research investigations at the Rutgers Marine and Coastal Services COOLroom, a collaboration of oceanographers studying the coastal ocean. Five interdisciplinary projects, as well as two physics projects, a biology project, and an earth science project, use real-time or near real-time data to support learning the science concepts.
|Creek Watchers: Exploring the Worlds of Creeks and Streams
||This curriculum is one in a series of five by the California
Aquatic Science Education Consortium (CASEC). Creek Watchers aims
to encourage youth groups and leaders to explore creek and stream
ecosystems. Youth get hands-on experience with stream habitat,
inhabitants, and the effects from surrounding land use within a
watershed. Activities are designed to help youth apply basic
science concepts such as observing, comparing, inferring, and
analyzing. Students receive "Task Cards" and "Lab Notebook" sheets
to record their findings. Authors provide ideas for stream action
projects and list local California resources to contact for those
|Curriculum Guide for Wetland Education, A
||This K-8 curriculum guide was produced for school districts of Oswego County, NY. It consists of concepts related to ecology, plant and animal life in wetlands and an overview of conservation policies for the classroom teacher. Wetlands of Oswego County are identified and categorized for field study sites. Fifteen hands-on activities (some drawn from other programs or publications) and nine lesson plans are the instructional materials of the unit. Extensive background information is presented.
|Decision-Making: The Chesapeake Bay
||The major goal of this curriculum is for students to identify and
analyze conflicting interests, issues, and public policies
concerning the Chesapeake Bay. Youth then determine their effects
on people and their environment. Instructional time can range from
15 class sessions to an entire semester. Through the 5 educational
components (introduction, videotape, simulation, reference source
and application) educators may choose to use the materials
independently or incorporated into existing instructional units.
Instructor training is required.
|Discover a Watershed: The Everglades
||A comprehensive curriculum focusing on the Kissimmee-Lake Okeechobee-Everglades (K-O-E) ecosystem. 'Discover a Watershed: The Everglades' is divided into three sections:
1)The Natural Watershed: Pieces of a Puzzle - a thorough reference section on the natural and human history of the area. Natural history topics include the hydrology, geology, climate, animal and plant species, and communities comprising the K-O-E ecosystem. Discussion of the history of human occupation and change within this unique ecosystem begins with the early Native American groups and continues through the establishment of the Everglades National Park in 1947. 2)The Altered Watershed: Rearranged Pieces - a discussion of contemporary issues and problems resulting from the cultural and ecological demands/changes placed on the K-O-E ecosystem by a rapidly increasing population. 3)Investigations: Putting the Pieces Together - a collection of fifteen interdisciplinary activities correlated with chapters in the reference section.
Designed to fit the needs of diverse educators; in its entirety, the curriculum provides a six-to-eight week course of study on the watershed. Individual activities and reference sections can also stand alone for use by formal and nonformal educators in various disciplines.
||These materials were developed to enhance the ability of the
Washington State Department of Ecology to preserve and manage
wetlands in Washington.
Activities address the definition of a wetland, wetland field studies, wetland functions, and human effects on wetlands. The materials were designed to be used as a unit or integrated into existing curricula. Materials are activity-based and applicable to other regions of the country. An interesting aspect of this resource is that it focuses on the idea that both action and inaction affect the outcome of environmental issues.
|EARTH: The Water Planet
||A collection of water activities to encourage problem-solving and
critical thinking skills for middle elementary students. Primarily
indoor science activities. A "Guide to Activity" and detailed
background "Readings" sections are provided for each module. Overall
curriculum theme is equity and scientific literacy for everyone.
|Ecological Citizenship (EcoCit). 5th Grade. Precious Water
||This is one of nine units in the Eco-Cit urban environmental education
program written for grades K-8. "Precious Water" is designed for 5th
graders. The multi-disciplinary, action-oriented curriculum involves
students, parents, teachers and the community. Topics covered include the
water cycle, human inputs, and ways to conserve water resources. Eco-Cit
is based on a philosophy of constructivist and cooperative learning for
|Energy, Economics and the Environment: Case Studies and Teaching
Activities for Elementary School
||Designed to teach elementary students the interrelationships between
economics and environmental issues, this unique curriculum provides
students with a conceptual framework to help address human-induced
environmental problems. Activities center on three areas: knowledge and
concepts, effective decision-making skills, and action projects. There are
four interdisciplinary teaching units that focus on basic economic
principles, and forest, water and energy resources.
|Env. Resource Guide: Nonpoint Source Pollution Prevention
||These units provide basic information on the relationships between land use and water quality--specifically nonpoint source water pollution. Four grade ranges address K-12 classrooms. Each level addresses pollution sources; point vs. nonpoint; sediment, nutrient, bacterial and toxic pollution; agricultural, urban, mining, forestry and industrial sources; as well as best management practices.
|Estuary-Net
||Estuary-Net focuses on point and non-point source pollution problems in estuaries and watersheds to highlight the value of long-term data collection and analysis, the scientific process and its contributions to problem-solving, and the importance of telecommunications as a valuable networking tool. The curriculum is organized into 3 levels: UNDERSTANDING WATER QUALITY introduces students to watershed variables and processes through hands-on classroom activities; WATER QUALITY MONITORING/DATA COLLECTING leads students to develop and implement a water sampling plan that is then applied to a local stream site; and USING AND IMPROVING MONITORING DATA incorporates development of quality assurance action plans. Throughout the unit, students employ telecommunications networking to collaborate with other agencies and school groups that are also collecting data in their problem-solving activities.
|Experiencing Water Resources: A Guide to Your River Basin
||A teaching package designed for use with 3rd-5th grade classes. It is specially tailored for teaching about the resources and issues in a specific river basin. Materials are provided for both teacher and students.
|Farming Louisiana’s Water. A 4-H Aquaculture Project, Grades 7-9
||This student workbook focuses on aquaculture. Written for grades 7 through
9, the principles of aquaculture include: the history of aquaculture; job
of fish farmers; aquacultural techniques regarding feeding, controlling
predators and unwanted animals; and harvesting, processing and marketing.
There are twelve activities for each grade level, followed by a project to
develop and maintain an aquarium. Provides educators with extensive
background information and instructions.
|Fishy Science. A hands-on approach to learning about fish
||Middle school-aged youth learn about buoyancy, osmosis and respiration
while studying fish physiology. Activities are classroom-based using an
aquarium and goldfish (or other hardy species). Through observation and
comparison of fish sensory perception, youth draw connections between
human and fish. Youth receive activity sheets that encourage investigation
|Flood Teacher's Guide, Videocassette, and Student Edition: Event-Based Science Series
||An interdisciplinary module in the Event-Based Science series, 'Flood' tells the story of the Great Flood of 1993 through newspaper articles, video footage and personal interviews. Students study the causes and effects of floods through exploration of stream and river dynamics in 11 activities. Using additional resources and knowledge gained through activity completion, five-member teams of students design a new national park along the St. Joe River in Idaho to demonstrate stream system dynamics. Profiles of professionals involved in park design- such as landscape architects, hydrologists, cartographers, geologists, and forest recreation technicians- are included in the curriculum. The module concludes with a presentation of park plans and advertising brochures to the entire class.
|Florida 4-H Marine Science Program
||Curriculum objectives center on how to teach youth to use simple
field gear and to understand the relationships between ecosystem
components. Materials include: a leader's guide, a member's guide,
a project guide, and a project record book. Leader and member
guides provide instructions for conducting and evaluating field
guides to 6 marine ecosystems. The member's guide provides
background material on organisms found in ocean ecosystems. The
project guide and record book complement the curriculum and are
meant to be used while visiting an oceanarium. Authors do not
specify a target audience, but materials seem designed for 6th grade and
older. Activities are dependent on leader direction.
|Fragile Fringe, The: A Guide for Teaching about Coastal Wetlands
||Available on the World Wide Web, 'The Fragile Fringe' uses activities and background information to provide a framework for the study of coastal wetlands. The curriculum is divided into six modules -each identifying activities for different grade levels: Where Are the Wetlands?; The Mississippi River: Draining a Majority of the United States; Beneficial Functions of the Wetlands; Barrier Islands as Part of and Protection for the Wetlands; Loss of Wetlands: Subsidence; and Wetland Loss: Digging of Canals. Options for student activities include visiting a wetland and collecting plant specimens; constructing a model watershed; simulating predator/prey dynamics; investigating run-off; and demonstrating subsidence.
|Freshwater Guardians: Defending Our Precious Supply
||Developed for 10-15 year olds, this CASEC guide is one of five in
a series. Activities help youth understand the sources and effects of freshwater pollution. "Task Cards" and "Lab Notebook" sheets are provided for students to record their results. The overall activity objective is that students learn science by doing. (Spanish version available)
Students are encouraged to make predictions and explore alternative
perspectives for addressing problems, issues and questions.
|From Ridges to Rivers: Watershed Explorations. Stage Two: Ages 12-15
||In this guide, adult leaders learn to work with teens, ages 12-15, in
non-formal educational settings. There are three goals: to help learners
understand their watershed; to develop scientific inquiry and critical
thinking skills; and to encourage active, intelligent care of the earth’s
natural resources. Activities use watershed models to encourage hands-on
learning and to recognize conflicting viewpoints on environmental issues.
|Gee-Wow! Adventures in Water Education
||This curriculum was developed as part of the Groundwater Education
in Michigan (GEM) Program. The goal is to enable teaching of
concepts related to water, groundwater, and pollution prevention.
It includes 28 activities and a video, It's Found Underground:
Groundwater Our Buried Treasure. Lessons may be taught as a unit or
used separately to supplement other classroom activities. Includes
an index cross-referenced by title, grade, subject area and
|Give Water A Hand. Youth Action Guide and Leader Guide
||Youth can make a difference through watershed-based, community action
projects. Using the service-learning approach to environmental issues,
youth, age 9-14, gain experience in addressing water-related problems.
The Youth Action Guide features a series of activities that walk youth
through investigation, choosing a project, planning for action, taking
action and evaluation (65 pages). In the Leader Guide, adults will find
tips on skill development, background information for each activity, and
how to use experts as project collaborators (33 pages).
|Great Lakes In My World, The
||Activities are designed to increase awareness and appreciation for
the Great Lakes by including them in regular curriculum units for
all disciplines. Activities cover cultural issues, current
management concerns, and natural processes. Manual includes an
index listing appropriate grade and subject area in which to
include Great Lakes material.
|Ground Water Education for Secondary Students
||A booklet containing background information and activities designed to teach students about aquifers, and the interrelationship between ground and surface waters. The importance of water conservation, pollution prevention, and water resource management issues are also addressed. The curriculum incorporates lectures, laboratory activities, games, demonstrations, and assessment activities.
|Groundwater Adventure, The
||This curriculum is part of the Water Environment Federation's
package designed to educate the public about important water
quality issues. Topic materials are provided in a "building block"
approach to allow flexibility in fitting the materials into an
existing school curriculum. Each set includes a video and student
activity guide. Activities in this set address how to clean up
groundwater contamination in more detail than other curricula.
|Groundwater Education Program, Parts 1,2 & 3
||The purpose of developing these materials was to enhance
groundwater quality through implementation of action-oriented
groundwater programs at the local level. This is a curriculum
designed for use as an in-school science unit, but was developed
with the help of a 4-H extension specialist. Contents of this kit
are comprehensive, including for each of the 3 parts: a teacher's
guide; booklet with information and suggested activities; an
Arlegan County 4-H Resources catalog; equipment needed for
classroom activities; additional resources including other
curricula; fact sheets; and informational tests. Materials need to
be adapted for younger end of suggested grade range.
|Groundwater Protection Curriculum Guide and "Groundwater - The Hidden Resource" videotape
||Information, video, and activity ideas designed to familiarize
students with the source of their drinking water, the management of
waste water, how groundwater becomes polluted, and how groundwater
pollution can be prevented. Information materials provide in-depth
background about Missouri hydrogeology.
|Groundwater Study Guide-DNR
||Resource packet and activity ideas. Activities focus on: the water
cycle and hydrogeology, groundwater contamination, water and waste
water treatment, water conservation, and groundwater use rights.
Written materials may be challenging for 6th graders, the younger
end of suggested grade range.
|Groundwater: A Vital Resource
||A series of 23 activities on four topics: the water cycle, water
distribution in soils, water quality, and community impacts on
groundwater. Each topic includes activities for a range of ages.
Strong technical/science orientation. Limited integration with
daily life of the youth.
|H2O Below: An Activity Guide for Groundwater Study
||Developed as part of the Illinois Middle School Groundwater Project, this curriculum focuses on the interdisciplinary study of the geology/hydrology dynamics of groundwater movement and quality. Students observe a groundwater flow model, study the porosity and permeability of different soils, construct a water filtration device, analyze home water use and conservation practices, participate in a decision-making simulation involving a hazardous waste disposal site, and develop a survey instrument to identify -and take action on- a local groundwater issue. Chapters include Water and Why it is Important; How Water Moves Through the Ground; How Water Becomes Polluted; Clean Water Through Filtration; Protecting and Conserving Groundwater; Testing Groundwater; and Groundwater Issues. Activities within each chapter are correlated with Illinois State Goals, in addition to including objectives, background information, materials lists, vocabulary, procedures, student worksheets, evaluation suggestions, and extensions.
|Healthy Environment-Healthy Me: Exploring Water Pollution, 4th grade
||Part of a series of environmental and occupational health curricula designed to
supplement school curricula in grades K-6. The series provides a different topic
for each grade. This topic is presented in 15, 45- to 60-minute units. Many units
focus on wastewater treatment. Describes how water becomes polluted and how
to prevent pollution, but does not emphasize how drinking water is treated before
|Hidden Treasure. Instructional Materials for Groundwater Resource Protection, A
||Designed as a supplement for the school curriculum, these materials
focus on the relationship between agriculture and groundwater.
Includes unique sections on "Best Management Practices,"
groundwater protection in urban settings, managing underground
storage tanks and water testing. Students design a management plan
for proper lawn care. Covers both rural and urban issues.
|Hoover Dam: Teacher/Student Learning Packet
||This curriculum, developed by the Bureau of Reclamation, strives to meet the goals of the Dept. of the Interior by "helping students understand how the decisions of the past helped shape their lives and future." It is divided into four areas of study: history, wildlife, water resources and hydroelectricity. Each section offers information and suggests activities to address 2 to 5 main concepts. This curriculum narrowly focuses on the Colorado River.
|How Well is Your Water? Protecting Your Home Groundwater
||Written for grades 7-9, this activity booklet guides independent
investigation, assessment, analysis and action on well and groundwater
contamination. Activities may be adapted to other regions and are suitable for
formal, informal and nonformal educational settings.
|Indoor River Book, The
||This book is part of the COMMON ROOTS GUIDEBOOKS series. In collaboration with adult facilitators, students build an indoor river system modeled on a local river. The design and assembly stages require two school mornings with the assistance of an experienced carpenter. A materials list, diagrams, and photographs of completed products are included. Activities included in 'The Indoor River' integrate the sciences, math, social studies, design technology, language and creative arts.
|Instructor's Guide to Water Education Activities
||Intended as a general water curriculum. Materials and activities
integrate water science concepts with water use applications and
|Investigating Groundwater: The Fruitvale Story
||Designed for middle to high school youth, this module closely
resembles steps taken in a real water contamination situation,
e.g., identify the problem, research, community involvement,
decision-making and action. Requires the use of a chemistry kit.
Activities build on each other; this curriculum represents one
|Investigating Streams and Rivers
||Recommended for use with "Field Manual for Water Quality
Monitoring" by Mark K. Mitchell and Wm. B. Stapp. However, only
activities 4 and 5 require use of manual. Unique in that activities
provide a mechanism for learning some fundamentals of political
action (e.g., making contacts, group concerns about problem/issue
of process, interview and phone skills, developing action plans).
Excellent guidance in developing, implementing and evaluating
action plan. Activities can be complemented by participation in the
Global Rivers Environmental Education Network (GREEN)-sponsored
computer conferences. Materials contain suggestions for using
computer network to enhance student understanding. Manual includes
user evaluation/feedback form.
||A good review of basic principles on water science, the water cycle,
groundwater, wetlands, water quality and quantity issues, and water
conservation actions for grades 5 and 6. Contains lesson plans, worksheets
and activities to complement an accompanying video. Uses examples specific
|Jason XIV: From Shore to Sea
||The Jason Project publishes a yearly science expedition linked to actual research by scientists in the field. Each program combines an inquiry-based print curriculum, video supplement, live telepresence during a 2-week live expedition broadcast and a gated online community of Jason Project participants. The research project associated with From Shore to Sea was completed in the spring of 2002; however, a videotape of highlights of the research is available to replace the live telepresence if someone chooses to use the print curriculum as a stand-alone. The online gated community feature would not correspond chronologically to a study taught at a later date.
Detailed correlation to state and national standards in science, geography, math, English and technology along with performance-based assessment options for measuring progress make the program usable for a wide audience. From Shore to Sea, the 2002-2003 project, uses California's Channel Islands as a base to explore geologic and cultural history, the management of coastal ecosystems, and natural resources conservation.
|Kids In Creeks: A Creek Exploration and Restoration Program
||This program guide, created for grades 3-12 in the San Francisco Bay area, provides teachers with the relevant information to conduct a creek study program. Many options and details have already been explored by authors, e.g., a pre-arranged list of organizations willing to participate in the program, materials in the lending library, and list of creeks in the region that may be easily accessed by classes. There are "Action Projects" at the end of each activity for students to further get involved in their community.
|Kids Network - What's in Our Water?
||Curriculum package includes Teacher's Guide, Kid's Handbook,
Software Manual, and software for Apple IIGS. Computer and modem
are required. National Geographic Kids Network is a
telecommunications-based science curriculum. The water unit
emphasizes watershed studies. It is recommended for students in
grades 4-6, but would also interest older students. Some units
require relatively sophisticated skills which would seem more
appropriate for seventh grade and up. Unit support materials include access to Hot Line staff and a "unit scientist," a professional who communicates with the class via electronic mail. Planned sessions require a minimum of 15 hours of class time during a six-week scheduled communications calendar. An unusual perspective of this curriculum is the idea that geographical and cultural qualities can influence water use. Extension activities provide opportunities for community studies and enable high quality experiential learning activities on many of the water topics emphasized in the classroom activities. This is also one of the few curricula to provide background for students' understanding of risk decisions by providing an activity which evaluates the toxicity and concentration of pollutants.
|Lake Erie: The Great Lakes Project
||This K-12 curriculum is one component of the Great Lakes Project--Lake Erie, developed to improve environmental education in the Lake Erie watershed. Activities were contributed by teachers involved in the project as well as drawn from other resources, such as Project WILD and Project Learning Tree. K-6 lessons offer hands-on experience with concepts including habitat, the water cycle, watersheds, plants, soil, food webs, populations and community, and ecosystems. 7-12 curriculum covers limnology, chemistry, topography, and biology as well as a broad examination of environmental action skills.
|Land and Water
||'Land and Water' is a sequential curriculum consisting of 16 hands-on activities. Working in pairs or cooperative groups, students investigate the interactions between land and water by constructing and operating a simple stream table. During the course of the unit, students explore the components and properties of soils in relationship to soil and water movement; create and label "aerial" maps; investigate the role of ground cover and landscape topography; and design and build dams. Concepts such as the water cycle, erosion, runoff, deposition, glacier formation and movement, and stream flow dynamics are discussed and/or explored. The curriculum concludes with an embedded assessment as students design and build a model landscape. Numerous activity extensions incorporating other disciplines both expand and diversify the curriculum.
|Leap Into Lakes. The Teacher's Manual for A Hands-on Exhibit About Lakes and Water Quality
||This teachers’ manual focuses on Wisconsin water features, e.g., glaciers,
groundwater, lakes, and wetlands. It accompanies a hands-on exhibit on
lakes and water quality issues located at the Madison’s Children’s Museum.
The ten sections cover primarily science activities and include background
information and answers to common questions.
|Learning to be Water Wise & Energy Efficient: An Education Program
||Designed for upper elementary and middle school students, the activities
help teach water and energy conservation. Students are asked to conduct
several activities at home with their families. The complete teaching
packet includes the teacher’s guide, an orientation video, conservation
supply kits, four posters, progress charts and checkup sheets. (Includes an 18-minute video designed for Grades 4-8. The conservation
supply kit contains a showerhead, aerators, a compact fluorescent bulb, etc.)
||Written for first through third graders, this activity guide centers on
science process skills: observation, description, comparison,
classification, and written and drawn conclusions. There are five
activities in all - each with "modifications for kindergarten." For a
quick reference, authors provide a step-by-step, Summary Outline for each
activity. The "Hands-on Science in the Classroom" section offers tips to
engage the science learning process.
|Living in Water: An Aquatic Science Curriculum
||Activities focus on a scientific study of water, aquatic
environments and the plants and animals that live in water. The
curriculum covers both marine and freshwater habitats. The emphasis
of the materials is on process rather than content. Unique aspects
include answer keys that are provided in language students would
likely use, and activities which teach students about describing
something they can't see by measuring it and correlating their
data. Many appendix materials are provided to facilitate ease of
teacher preparation/presentation (over 100 pages).
|Local Watershed Problem Studies - Elementary School Curriculum
||A collection of lessons written by teachers with a variety of
backgrounds. Lessons vary in degree of detail. Focus is on
interface between land use and water pollution. Includes
instructions on how to build water testing equipment. Provides many
stories and folklore examples to enhance student enjoyment of a
particular topic and to support language arts education goals.
Offers teaching suggestions for use with both lower and upper
elementary age students. The appendix includes suggestions for
citizen and government action in controlling non-point source
pollution in urban areas and rural areas, and a discussion on role
of values in environmental education.
|Local Watershed Problem Studies - Middle and High School
||A collection of lessons written by teachers with a variety of
backgrounds. Lessons vary in degree of detail. Focus is on
interface between land use and water quality. Contains unique
attitude survey form. Though developed for Wisconsin, simulation
activities could be adapted for other locales. Lessons typically
take from several days to several weeks of class meetings. Some
units are not directly related to water issues.
||While providing basic education about marine science, activities
focus on the local resource, the Santa Barbara Channel. Units
include physical characteristics of the channel, flora and fauna of
the channel, human history of the channel, and marine policy.
Materials were developed for a program predominantly reaching
low-income minority students who have limited access to special
programs. Activities are designed to increase self-esteem and
increase career awareness. Materials include an interesting
"invitation" activity that encourages development of group identity
and arouses student excitement. Activities provide a good interface
between school and nonformal settings. Appendices include
suggestions for marine careers, marine educational resources,
teaching sheltered English, and starting a marine education
program. Materials include extensive material on marine flora and
|Mapping Fish Habitats. Teacher's Guide. Grades 6-10
||Written for grades 6-10, students design an aquarium to draw
conclusions using basic scientific concepts: predicting, observing,
recording, experimenting, analyzing and interpreting. Students
also learn fundamental ecological concepts such as ecosystem,
habitat, home range, and territory. Through daily observations and
experiments, students draw conclusions about fish in their natural
environment. Experiments include changing one component of fish
habitat and mapping the fish's behavior based on the change.
|My World, My Water and Me! A Teachers Guide to Water Pollution Control
||Curriculum emphasizes how water gets polluted and the impacts of
pollutants on living things. It uses the arts extensively to convey
human uses and impacts. Activity directions do not always make the
connection between the specific activity and the overall objective
of the curriculum. However, background information is supplied to
enable the teacher to make the connections. Extension activities
sometimes have a significant role in developing understanding for
a particular concept. Materials use a unique strategy to tie all
the activity concepts together. Students write a story, in
sections, as the unit proceeds. The teacher or leader provides the
story outline, a trip through the waste water system by students
shrunk to one one-thousandth of their size. The students provide
details and adventures for each step. Materials do not indicate
which activities relate to which part of the story. Teachers will
need to select activities most relevant to the aspects of the water
pollution story they wish to emphasize.
|Nature of Water Power, The
||The Nature of Water Power is a curriculum for middle grades, guiding students to explore the scientific and social links between hydroelectric power and the environment. Developed by the Foundation for Water and Energy Education, it is intended for use in the northwestern United States. Through the study of properties of water, the water cycle, the physics of moving water, and electricity generation, students gain skills to explore the environmental impact of damming rivers for power production and to compare costs and benefits of hydropower to other energy sources. Activities utilize teamwork and employ journaling and other assessment techniques.
|Naturescope: Diving Into Oceans
||Instruction in these materials is provided in a unique layout that,
in several cases, could be used independently by the student.
Activity descriptions are clearly explained and illustrated. Topics
include the physical ocean, life in the ocean, life along the
coastline, and human impacts. Each topic includes an activity for
primary, intermediate, and advanced age ranges. Activities are not
dependent on each other. Materials include some beautiful drawings
of sea life. Excellent supplementary resource list.
|Naturescope: Wading Into Wetlands
||Instruction in these materials is provided in a unique layout that,
in several cases, could be used independently by the student.
Activities are clearly explained and illustrated. Topics
include: what makes a wetland, saltwater wetlands, freshwater
wetlands, wetlands and people. Each topic includes an activity for
primary, intermediate, and advanced age ranges. Activities are not
dependent on each other. Excellent supplementary resource list.
|New Jersey 4-H Marine Science Project. Leaders Guide
||Set in a club or in the classroom, this leaders’ guide helps teach about
the New Jersey marine environment. It is divided into five sections:
habitats, organisms, career exploration, community involvement, and
general. Each section consists of a set of activities. Construction of an
aquarium is one of the main projects. An annotated bibliography of
additional resources is included. Also includes pre- and post-evaluation
tests for learners.
|Oklahoma Aqua Times
||A 4-H project, this unit utilizes three components to further water conservation--a teacher's guide with activities, 4- to 8-page student newspapers (one for each of five units), and a video of young reporters interviewing water resources professionals. Topics include the hydrologic cycle, groundwater, water use, pollution and conservation. A culminating project is a student-produced newspaper communicating conservation concepts gained.
|Operation Water Drop
||This online resource for the study of drinking water quality "encourages students to develop critical thinking skills which will empower them to become actively involved in issues such as ensuring safe drinking water within their community, and on a global scale." Elementary teachers demonstrate tests on their community drinking water for alkalinity, color, chlorine, heterotrophic plate count, pH, ammonium and sulfate. High school students work in small groups to perform the above tests and also test for manganese, iron, nitrates, residual chlorine, total hardness and arsenic. The local water supply is compared with urban, rural and untreated water, and with Canadian, US and European drinking water guidelines.
|Operation Water Flow
||Operation Water Flow gives teachers lessons in math, chemistry, biology, social studies and science in order to give students a more thorough understanding of issues surrounding drinking water, such as establishing the true cost of water, the social responsibilities of providing safe drinking water, the need for regulation, and the need for water conservation and source protection.
|Oregon Children’s Groundwater Festival: 1996 Teachers’ Guide
||Authors suggest conducting activities in this guide before visiting the
Oregon Children’s Groundwater Festival, and as a follow-up to reinforce
concepts. It is adaptable to all grade levels. The focus is
interdisciplinary, but with a strong emphasis on science of water
principles, through activities that use models and experiments. Other
topics addressed include the hydrological cycle, and water quality and
|Our Great Lakes Connection
||These materials were designed to enable the teacher to integrate
activities about the Great Lakes into a regular classroom program.
Ideas for the activities were provided by teachers and Great Lakes
specialists. Materials emphasize use and development of a variety
of learning skills. Activities focus on the historical/cultural
role of Great Lakes in people's lives. History, geography and
economics form the basis of the content, but materials include some
emphasis on pollution impacts and lake effects on weather and
|Our Groundwater
||One of 3 packets designed as a supplement to the classroom. The
others are "Our Surface Water" and "The Water Around Us." Uses
demonstrations to convey four main ideas about groundwater.
|Our Surface Water
||One of 3 packets designed as a supplement to the classroom. The
others are "Our Groundwater" and "The Water Around Us". Provides
directions for a pond and a stream field trip and instructions on
how to conduct a water quality survey.
|Paddle-to-the-Sea: Supplemental Curriculum Activities (Holling Clancy Holling's Paddle-to-the-Sea)
||Developed for use in 3-6th grades, this interdisciplinary
curriculum is designed to reinforce the concepts introduced in the
story, Paddle-to-the-Sea. Activities center around topics
pertinent to the Great Lakes region such as surrounding land use,
historical uses of the Lakes, and ecology of the Great Lakes. Most
activities are pencil/paper and seatwork-oriented.
||This descriptive curriculum presents activities designed for Earth Science
teachers of middle school-aged youth. Activities center around three key
concepts: the investigation of water and its properties; the forces that
affect water’s movement on the earth; and the human impact on the ocean;
with emphasis on the physical and chemical properties of water, and little
on ecology and environmental concepts and issues. Each activity has a
student’s section and a teacher’s guide with background information,
procedure and questions. A set of readings follows each activity and can be used
to enhance teacher preparation, or as further resources for students.
|Plastic Eliminators: Protecting California Shorelines
||One in a series of five, this activity guide aims at increasing
awareness of plastic marine debris in 10-15 year old youth. The
first portion of the guide focuses on awareness, while the
remaining activities deal with taking action in the youth's
community. Activities culminate in an Adopt-A-Beach and
Cleanup, after youth have learned how plastics can affect
marine animal life and what actions they can take to reduce plastic
|Pondwater Tour, The
||Students are encouraged to practice science investigation skills, i.e.,
discover, examine, and experiment with chemical properties of water. The
Tour includes a test kit and worksheets for the hands-on investigation of
a water sample collected from a pond, lake, stream or river.
|POW! The Planning of Wetlands: An Educator's Guide
||POW! The Planning of Wetlands is a two-part guide to creating a schoolyard wetland. Part I, Background Information, is a mini-course in wetlands construction, offering detailed information on water supply, permits, design, grading, specifications, construction, maintenance, cost estimates and a botanical guide to 40 native wetland plants. Part II contains 25 activities that involve students in grades 5 to 12 in the process of wetland development. Some modifications are offered for K-4 students.
|Project W.U.L.P. (Wetland Understanding Leading to Protection)
||This multidisciplinary wetland unit is designed for middle school-
aged students. Activities begin with general knowledge of wetland
functions and human impacts, then proceed to comprehensive, well-
thought-out field activities for students. Some activities are
specific to Wisconsin wetlands. Authors attempt to pull together
a complete wetland unit to be taught entirely in the classroom or
classroom and field experiences. Unit includes an extensive,
multimedia wetland resource list.
|Project Water Works
||Requires classroom setting and computer. Extensive preparation by
instructor needed. Emphasis on water science and water management.
Water management section of software emphasizes importance of
values in decision-making, yet identifies "right and wrong" answers
to simulated water management scenarios.
|Project WET Curriculum & Activity Guide
||A compilation of over 80 water-related activities, 'Project WET' is organized into seven units: 1)Water has unique physical and chemical characteristics, 2)Water is essential for all life to exist, 3)Water connects all Earth systems, 4)Water is a natural resource, 5)Water resources are managed, 6)Water resources exist within social constructs, and 7)Water resources exist within cultural constructs. Within each unit, activities are designed to accommodate different learning styles and multiple intelligences, in addition to incorporating many disciplines -art, science, math, language arts, social studies, and music. Activity format includes a suggested grade level, teaser introductory question, summary, objectives, materials lists, making connections- describing the relevance and rationale for the activity, background information, procedure, assessment strategies, extensions, and resource list. Students explore and expand their knowledge, feelings and values related to water as they compare past and present water use; explore issues of water availability in different cultures; classify wetland soil types; interpret maps to assess changes in a watershed; investigate the source of groundwater pollution; monitor personal water use; develop strategies to clean wastewater; and discuss/debate management strategies.
|Protecting Our Watersheds
||"Protecting Our Watersheds," a middle school science and civics unit, results in cooperative community action. Students evaluate their local watershed through observation, and data collection to identitfy water quality issues. Detailed, process-focused lessons lead students to research policy and practices impacting these issues, to select a problem, and develop an action plan to effect long-term improvement.
Cooperative Experiential activities are centered with "reflection questions" at the conclusion of each lesson. "To increase youth Voice" offers leadership opportunities in each lesson. Includes Facilitators guide, activity notebook, tip cards, 4 posters, totebag. Additional resources available such as CD-roms examining Upper Mississippi watershed and introducing water monitoring, field manual and kits for water monitoring, booklet of water quality issues for debate, sourcebook, case study, etc.
|Pure Tap: Adventures in Water
||Pure Tap: Adventures in Water is a publication of the Louisville Water Co. It presents multi-disciplinary activities for 3rd to 5th grades on the water cycle, water use, treatment and delivery of drinking water. Most lessons are specific to the Louisville vicinity and system.
||This update of a 1989 GEMS (Great Explorations in Math and Science) curriculum has been revised to emphasize key environmental issues, to align activities with National Science Education Standards, and to lead to unified concepts in Earth and Environmental Science--both in the scale of geologic time and the impact of humans and technology on natural resources. Using river models of diatomaceous earth (the new version offers alternatives to this medium) and a dripper system, students explore rivers as earth shapers, simulate geologic timelines, and experience how human activities (dams and toxic waste dumps) impact natural systems. The unit offers multiple assessment suggestions, literature connections, and excellent detailed directions to help instructors maximize the value of the lessons.
|River, The: Humanities
||This unit is the humanities strand of a 3-part curriculum about the Rio Grande. The other two strands, dealing with science and social studies, are profiled individually in this database. The curriculum is no longer in print, but the New Mexico Culture Net makes it available to download at their website: www.nmculturenet.org/riverproject. Through exhibits and activities, students express their personal experience of the river, investigate physical characteristics of the river, glean an understanding from legends and oral histories of the people and cultures of the Rio Grande, and view and respond to the work of visual artists inspired by it.
|River, The: Science
||This unit is the science component of The River: A Middle School Multi-disciplinary Curriculum for the Rio Grande. Used in conjunction with the social science and humanities strands, the curriculum's goal is to prepare students to understand the consequences of their actions and to participate in community decision-making. The science strand covers the distribution and use of water, river systems, ecosystems and explores problems confronting the Rio Grande and issues of sustainability.
|River, The: Social Studies
||The social studies component of "The River: Inter-Disciplinary Curriculum for The Rio Grande"
challenges students to analyze data, explore their personal values, and evaluate the ecological health and uses of the Rio
Grande in relationship to the socio-cultural history and dynamics of the area. Less familiar concepts addressed include:
1)Historical use of water, 2)New Mexico Water Law, and 3)the importance of the "bosque" (riparian area). An in-depth
concluding component of the curriculum is "The River Simulation" as students identify personal and community interest in
the River, analyze interest groups, explore regulations of river usage, and identify problems and develop an action plan to
foster sustainability. Slides of the river/watershed are included with the curriculum booklet.
|Rivers and Ponds
||Rivers and Ponds is a whole language thematic unit incorporating four children's literature selections: All Eyes on the Ponds; Frog and Toad Together; Look Closer: Pond Life; and Look Closer: River Life. Included in the curriculum are interdisciplinary activity extensions for teacher facilitation of each book and accompanying lessons. Working cooperatively, students: Collect and study the macro-invertebrates in a pond; Investigate the water cycle and surface tension; Construct a classroom pond, underwater pond scope, pond chain mobile, and props and costumes for the performance of Pond Readers' Theater; Write a frog and toad mini-book, a recipe for friendship, and water poetry. Measure the distance a frog travels, graph their favorite pond animal, and calculate pond problems.
|Rivers: Biology
||Rivers: Biology is one component of the 6-unit Rivers Project, the others addressing chemistry, earth science, geography, language arts and math. Useful as a free-standing unit or in conjunction with the other subject areas, this curriculum helps high school students understand the biological factors that indicate or are influenced by water quality in rivers. Students collect and test water, observe biological diversity in the field, and simulate the activities of a project development team and a government review team over proposed changes to the river studied.
|Rivers: Chemistry
||Rivers: Chemistry is one component of the 6-unit Rivers Project, the others addressing biology, earth science, geography, language arts and math. Workable as a unit of study for high school chemistry, this guide is an effective component of a cross-curricular thematic river study. The activities lead students to discover what variables comprise and determine water quality by field sampling and analyzing test results to determine overall water quality.
|Rivers: Earth Science
||Rivers: Earth Science is one component of the 6-unit Rivers Project, the other units address chemistry, biology, geography, mathematics and language arts. This unit is built upon hydrological assessment of river or stream ecosystems. Students learn how climate, geology, and society affect water quality. Students build earth science knowledge and field skills while using cartography, meteorology, and geology to investigate natural and human influences on rivers.
||Rivers: Geography is one unit of the 6-volume Rivers Project. The other units are Chemistry, Biology, Earth Science, Mathematics and Language Arts. This curriculum will help students understand the relationships among people, places, and environments and the interactions that occur on local, regional and global scales. Students explore a historical perspective of both the physical geography and the human development of an area river. Role play of environmental decision-making is a strong culminating activity.
|Rivers: Language Arts
||Rivers: Language Arts is one component of the 6-unit Rivers Project, the others dealing with chemistry, biology, earth science, geography and mathematics. This unit is particularly useful in conjunction with any of the others as it focuses on important communication skills for high school students studying the environment. Students develop skills in journalistic, expressive and scientific technical writing; they make oral presentations, practice interviewing and historical research techniques, and write political letters. The Rivers Project maintains a website at www.siue.edu/OSME/river where water quality data collected in other units is entered for the use of others, student writing is shared, and other materials are available.
||Rivers: Mathematics is a part of the Rivers Project, which also includes Chemistry, Biology, Earth Science, Geography and Language Arts. This unit achieves the goal of the National Council of Teachers of Mathematics that "instruction should be developed for problem-solving situations" using actual stream study. Skills needed to perform tests, make observations, analyze and present data are emphasized. Pre- and post-tests are included in each lesson. The mathematical concepts are reviewed and practiced within the context of stream study, then applied to real life data collected by students in field situations. They monitor changes in river levels, explore water use and estimate quantities, clean a river or stream area and analyze debris data, and test water quality and use statistics to infer impact on overall stream health.
|Sea Sampler: Aquatic Activities for the Field and Classroom
||Elementary, Grades K-6. Curriculum addresses a variety of science
and ecological topics, e.g., salt water characteristics, osmosis,
food web, niche and communities. There are 7 field and 14 classroom
activities. Detailed background information is not provided for
teacher or student; sources are listed where to find the necessary information.
Secondary, Grades 7-12 (separate edition). Similar activities as
the elementary edition addressing similar topics relating to
coastal/salt water living. This curriculum deals with more
integrated skills and concepts, e.g., taxonomy, food web/energy
|Sense of Water, A - Elementary edition
||Materials provide a set of short activities which can be integrated
into a variety of disciplines and grade levels. Activities are
organized according to sections, including: dependency of life on
water, the science of water including water ecology, climate, water
distribution and use, pollution potential of water, and the role of
water in culture. Each lesson is indexed by chapter reference,
grade, subject, length of activity, concept, key vocabulary and
credits. Includes suggestions for evaluation, subject and topic
index. A unique perspective includes activities which address the
concept that water of varying degrees of contamination may have
uses other than drinking.
|Sensing the Sea - (K-1) & (2-3) (2 Booklets)
||Activities center around set-up and care of saltwater aquarium.
Focuses on process skills of investigation (especially observation
and hypothesis). Unique aspects include use of the skill of
questioning (unusual), mostly through teacher example and the use
of divergent questions for which students propose possible
solutions rather than decidedly "correct" answers. Book 2 teaches
difference between observation and inference.
|Significance of Soil
||Significance of Soil is a primary component of the Adopt-A-Watershed K-12 science curriculum. Activity-based lessons present concepts, which are observed and/or applied in field situations, and culminate in a soil conservation action project and the creation of an informative brochure. Masters for Student Soil Saver Booklets and transparencies are included. Detailed Materials/Equipment sections and Advanced Preparation checklists make complex lessons manageable.
|Sourcebook for Watershed Education
||Activities revolve around two areas: watershed and water quality
monitoring, and understanding changes and trends within the whole
watershed. The manual is divided into two parts: 1) the first provides a
framework and strategy for coordinators developing a watershed program
network. It includes topics such as budget construction, program goals and
identification, and community participation and networking; 2) the second
part focuses on educators and includes a section on educational
philosophies, examples of curriculum matrices and models for
interdisciplinary education, and examples of units, lessons, and
activities designed by GREEN participants across the United States.
|SPLASH Stormwater Pollution: Learn and Share
||This K-8 curriculum was developed by the City of Eugene Public Works Stormwater Management Program to build a community of responsible water users with emphasis on their untreated stormwater. Primary lessons highlight the water cycle, city water systems, personal water use, and the impact of pollution on plant and animal life. Intermediate sections focus on local ecosystems and community issues. The middle school curriculum examines the role of human use in stormwater, wetlands and the Eugene area watershed. The curriculum is available online, but relies on an accompanying kit from the Stormwater Management Program with student materials and worksheets.
|Splash: Water Resource Education
||SPLASH is a set of resources and activities for middle school classrooms that promote protection of water resources. Originally produced as a packet of activities and fact sheets by the Southwest Florida Water Management District, the materials are available online or in print format. Activities range from building hydrologic cycle and wetland models, and constructing a solar-powered desalination plant to brainstorming potential future sources for drinking water and designing a SW Florida seaside community.
|Stop, Look and Learn About Our Natural World Vol. 1
||Only lessons specifically related to water resources are included
in this survey; thus it covers only Unit 2 of Volume 1 (27 of 216
pages). Other units cover soil, plant, tree, and wildlife
conservation. Materials were developed with a resource conservation
orientation. Worksheet instructions may be too advanced to be read
independently by some K-2 students. Many activities combine content
and study skills. Includes guide that references activities
according to subject area, skill, page number, and topic.
|Stop, Look and Learn About Our Natural World Vol. 2
||This survey reviewed only material in Water Conservation Unit (49
pages). Other units in this 244-page booklet include soil, plant,
tree and wildlife conservation. Materials were developed with a
resource conservation orientation. Worksheet language may be too
advanced to be read independently by some 3rd and 4th graders.
Additionally, some 3rd and 4th graders may not have the math skills
to complete or understand computations included in the materials.
Many activities combine content and study skills. Includes guide
that references activities according to subject area, skill, page
number, and topic.
|Stop, Look and Learn About Our Natural World Vol. 3
||Reviewed unit on water conservation. Forty-four of book's 215 pages
devoted specifically to water conservation. See comments about
Volumes 1 and 2.
|Story of Drinking Water, The
||Comic book about a variety of water issues is provided in English,
Spanish and French. The Teacher's Guide includes 19 activities to
provide hands-on experiences with topics mentioned in the comic
book. Intended for classroom application. Excellent focus on the plight
of third world countries regarding water supply.
|Stream Scene: Watersheds, Wildlife and People, The
||One of few curricula, if any, focusing on riparian areas and
intermittent streams. Only curriculum reviewed that studies the
effect of stream flow (water quantity) on plant communities. One of
few to approach populations with strong mathematical orientation.
Includes appendices on making field equipment; a description of the
salmon-trout enhancement program; general stream survey terms;
water resource agencies. Includes science background for
instructors and activities for students on any particular topic.
Material likely too advanced for middle school students without
|Stream Study and Water Quality Assessment Curriculum
||Written for grades 5-8, this curriculum focuses on stream ecology,
e.g., physical, biological and chemical monitoring. Curriculum
also addresses urban sources of water pollution and watershed
concepts. An "Outline of Advanced Concepts and Activities for
Stream Ecology and Monitoring" is included, although the material
provided in this guide may not be sufficient for the educator to carry
it out. The instructor may have to refer to the supplemental sources for
detailed background information. The supplemental materials
available are: Interpreting Results of Water Quality Tests in Streams
and Rivers. 1991. Frank Mitchell and Jeffery Schloss; and A Study
Guide to New England's Freshwater Wetlands. 1991.
||STREAMS--Science Teams in Rural Environments for Aquatic Management Studies, is an online curriculum for rural middle schools focusing on water resources and environmental stewardship. Using the Muddy Run Watershed of Huntingdon, PA for field study, this guide offers lesson outlines for collecting, analyzing and interpreting data along with identifying and formulating solutions to problems. Lessons present student objectives, procedures specific to the local area and assessment options. Handouts, worksheets and assessment tools are suggested from other water curricula, such as Aquatic Project WILD and Project WET, or must be teacher developed.
|Streamside Community, The
||'The Streamside Community' is one of the few curricula evaluated focusing on the identification and study of a riparian zone. During the course of this interdisciplinary curriculum, students observe, investigate, and inventory the plants and animals in a riparian ecosystem; learn about seed dispersal adaptations; and initiate a long-term amphibian population study and restoration project. (It is suggested that the teacher seek assistance from a natural resource professional or botanist for plant identification.) The curriculum identifies and explores ecological concepts such as species, niche, indicator species, food webs, communities, and ecosystems. Throughout the curriculum, the concepts of interactions and interdependence within a community are emphasized. This evaluation includes the teacher's guide only; additional materials and resources are available through purchase of the classroom kit.
|Streets to Streams: Youth Investigations into Water Quality
||The purpose is to educate 5-9th grade youth on surface water and ways to protect it. Suggested activities include a water festival and storm drain stenciling projects. The guide lacks pictures and graphics to illustrate key points. Also available is a 12-minute
video on storm drain stenciling, "Dump No Waste, Drains to Stream."
|Summary for Teacher's Guide to World Resources Watershed Pollution
||The Watershed Pollution guide is part of a series that contains a
lesson plan, student handouts, overheads, and student enrichment
activities. Authors suggest how to integrate global environmental
education into high school curricula through the national Goals
2000: Draft National Performance Standards. Activities focus on
events that happen in a watershed. The guide presents perspectives
of developing and developed countries on water use, water
pollution and watershed dynamics. Authors included a chart for
ideas referencing lesson plans and enrichment activities across
geography, math, science, civics, government, and history. To get
the most out of this unit, students should have an
introduction to ocean ecology and uses; discussions require
background for both teacher and student. Other units in the series
include: Watershed Pollution; Oceans and Coasts; Biodiversity;
Sustainable Development; Natural Resource Economics; Population,
Poverty, and Land Degradation; Energy, Atmosphere, and Climate; and
||Teacher's Guide provides background information and activities to
complement the student video. Student Guide provides additional
information about the water cycle, sources of water pollution,
wastewater treatment, and citizen action. Materials address the
concept of natural pollution, which is rather unique.
|Tapwater Tour, The
||Activities enable students to test tap water and evaluate the water
quality. Highly directive teacher materials, script provided.
|Teacher's Guide to World Resources: Oceans and Coasts
||Oceans and Coasts encourages high school students to explore the
sources and effects of marine pollution, and steps taken to
minimize these effects. Subtopics include role of oceans,
pollution types, and fisheries. The unit format encourages teachers
and students to engage in thoughtful discussion of oceans.
Students receive fact sheets, maps, graphs and articles.
Enrichment activities suggest that students map ocean pollution,
examine aquaculture, investigate bioremediation and examine land
use issues. The Audiovisual Resource list and Further Reading list
provide additional background and better understanding of ocean and
coastal issues. To get the most out of this unit, students should
have an introduction to ocean ecology and uses; discussions require
background for both teacher and student. Others in the series
include: Watershed Pollution; Oceans and Coasts; Biodiversity;
Sustainable Development; Natural Resource Economics; Population,
Poverty, and Land Degradation; Energy, Atmosphere, and Climate; and
|Teaching Aquifer Protection: ("TAP notebook"): A Curriculum Supplement
||Provides activities designed as a curriculum supplement. Focuses on
water quality protection and water conservation. Learning
objectives are referenced to state basic science skills for easy
interface with school curriculum. Written for South Carolina
audience, but more broadly applicable.
|That Magnificent Ground Water Connection: A Resource Book for Grades 6-8
||Two complete groundwater resource books are now available for teachers: one for grades K-6 and the other for grades 7-12. Both editions include selected groundwater-related activities adapted from available curricula. Incorporating the groundwater theme into science, stories, songs, math, social studies, art, and writing makes the resource books applicable over a range of subjects. The activities focus on groundwater issues in New England. Presenting the information with a New England spin teaches students about the region’s geologic and hydrologic idiosyncrasies and how groundwater and the water cycle function locally. Recognizing today’s children as tomorrow’s leaders, the curricula challenge students to think, sort out facts, brainstorm, experiment, and learn.
|That Magnificent Ground Water Connection: A Resource Book for Grades K-6
||Written for grades K-6 in the New England region, the curriculum deals
with groundwater issues through interdisciplinary activities on water
properties, the water cycle, groundwater, water distribution and
treatment, and water stewardship. It encourages students to apply their
learning toward citizen involvement and action. Authors provide thorough
background information and detailed activity instructions. The curriculum
contains examples specific to this region, but the core information and
activities are general and are broadly applicable.
|Through the Looking Glass. Teachers' Guide.
||Curriculum focuses on marine awareness for elementary and high
school students through a field trip to the Nature Center at
Odiorne State Park, Rye, NH. Pre- and post-field trip activities
complement and expand the concepts experienced during the trip.
Strong emphasis on incorporating activities into the standard
curriculum. Little to no background provided for teachers or
students on follow-up activities; only suggestions to integrate
marine awareness into the curriculum.
|Wade into Watersheds
||Wade into Watersheds is an intermediate component of the Adopt-A-Watershed K-12 science curriculum. Activity-based lessons present concepts, which are observed and/or applied in field situations, and culminate in a water quality action project. Many lessons refer to projects in 6 resource books, which are included in the curriculum purchase. [NOT ALL RESOURCE BOOKLETS WERE REVIEWED.] Detailed Materials/Equipment sections and Advanced Preparation checklists are helpful.
|Water Action Volunteers (WAV): Introductory, hands-on stream and river action projects for Wisconsin
||WAV is a collection of activities for youth leaders to select hands-on,
action-oriented projects for volunteer groups and classrooms. All
activities are adaptable to different age levels. The eight projects teach
about stream and river resources in Wisconsin, focusing especially on
community collaborative efforts to address pollution issues.
|Water Activities: Teaching Environmental Responsibility
||This publication of the Miami (Ohio) Soil and Water Conservation District is a compilation of activities adapted from other sources and narrative background information. Materials address water, pollution and wetlands. A number of simulation games are included. It lacks organization for age level and consistency of format.
|Water Around Us, The
||One of 3 packets designed as a supplement to the classroom. The
others are "Our Groundwater" and "The Water Around Us". Provides
directions for demonstrations and activities about the water cycle
and water conservation.
|Water Conservation In-School Curriculum
||Water education activities designed for easy integration into class activities. Binder separates materials by grade. Each unit contains lists of activities and materials needed, separated by day. When conducting activities, the teacher borrows a box of equipment from the Cooperative Extension office. Goals and objectives not stated for each activity specifically, but for the unit overall. Many of same concepts presented at each grade level (especially grades 1 and 2). Grade 4 examines climate effects, not a usual part of most water curricula. Grade 5 curriculum emphasizes soil and erosion. Includes suggestions for activities for science fairs and an environmental education packet from the Garden Club of America. Reading level and concepts may be too advanced for suggested grade levels.
|Water Conservation: Environmental Action
||Water Conservation: Environment Action--Analyze, Consider options, Take action, In Our Neighborhoods is one component of a 6-module curriculum developed by E2: Environment & Education that develops issue investigation and action skills as a prerequisite to environmentally responsible citizenship. In this module, students study hydrologic principles, pollution, water treatment, and water uses. They evaluate water quality and consumption at their school, analyze and interpret their data, and develop alternate conservation plans, which they then critique through a cost/benefit analysis. They present a proposal on conservation to school authorities for consideration. In 'Environmental Action, Water Conservation,' students use the school environment to investigate and analyze water conservation issues in a cooperative learning environment. Activities progress from a traditional teacher-directed classroom format to a student-directed environment with teacher as facilitator. In this curriculum, students explore the different uses of water and the ways in which it can be conserved; conduct a school water audit; research proposed conservation strategies; and present recommendations to the school administration or environmental committee. Completion of the curriculum requires eighteen to twenty 50-55 minute classroom sessions. 'Environmental Action, Water Conservation' is one of six environmental education modules within the E2: Environment & Education program-each designed to stand alone or in conjunction with one another.
|Water in Your Hands
||Curriculum consists of a comic book-style story about water with 4
accompanying activities. Relies on "learning cycle strategy:
exploration, concept development, and application." Suggests unique
educational strategy of using journals for notes, reflections, and
sharing them as parts of activities. Includes resource list for
both students and teachers.
|Water Inspectors: Examining H2O
||One of five CASEC guides written for 10-15-year-olds. This activity booklet focuses on the
physical characteristics of water, e.g., salinity, temperature, taste, hardness and clarity.
Activities are designed to engage students in scientific testing methods, including making
predictions and manipulating variables one at a time to determine which variables cause
||Water Magic can be used separately or as a complement to Splash!
Activity Book. The 23 activities cover a range of water science,
water issues, and water topics in our culture. Activities are
varied and age appropriate. Most are appropriate for both the
classroom and nonformal settings. Some activities do not relate
well to the stated objectives. Illustrations and an activity about
groundwater may lead to a misunderstanding of groundwater and
|Water Politics: A Water Education Program for High Schools
||Designed for 9-12 grade youth, this curriculum emphasizes water use
and water conflict issues. Covers such issues as conflicts among
urban, agricultural and environmental interests; water conservation
vs. developing new supplies, including the public participation
component. Uses case studies on water rights, canal building,
landfill development, protecting reservoir quality, risks and water
quality; water transfer, and the effect of the media on public
opinion, use of the Colorado River, and saving endangered species.
Some case studies seem biased in favor of development and do not
present the ecological impact of decisions on either side. Sways
students and teachers towards certain conclusions. Includes a map
of California aqueducts, "California Water Resources," and the
California Water Story, a video. Teacher background materials are
|Water Precious Water, Book A
||One of several publications from Activities to Integrate Math and
Science (AIMS) in the grades 2 - 6 series. Limited duplication
rights are granted with purchase of materials. Math activities
often rely on an understanding of multiplication, division and
percentages. Some activities are provided in both a low math
(visual) and high math (multiplication/division) format. Water
activities are related to other curriculum areas through
"curriculum coordinates" which provide suggested activities for
language arts, social studies, and the arts. Predicting, measuring,
calculating, estimating and data collection and analysis skills are
||Water Quality is a high school component of the Adopt-A-Watershed K-12 science curriculum. The student-directed learning in this unit of study commences with a field trip during which students make observations and initiate inquiry about water quality. They engage in a simulation that reveals the complexity of water quality issues and encourages them to consider multiple perspectives of water and land use, as they clarify their personal beliefs. They research the water quality issue they identified, then collect data about the field site in preparation for a water quality improvement project. These student-directed research lessons are correlated to the 6-part Rivers Project curriculum as instructional guides for chemistry, biology, earth science, geography, math, and language arts. The unit culminates with a school-wide Watershed Fair.
|Water Quality: Critical Issues/Critical Thinking Experience
||This 4-H Leader Guide presents four activities that promote awareness of water quality and utilize problem-solving techniques to address water quality issues. Simulations, an art activity, and discussion focus on how conflicting human interests impact water quality, supply, land use decisions and protection issues.
|Water Quality: A Water Education Program
||Focuses on water quality as it applies to a public water supply
system. Includes text plus two activities.
|Water Quality; Water Highways; Water Trade-offs
||Water education activities designed for easy integration into class activities. Binder
separates material by grade. Each unit contains lists of activities and materials needed,
separated by day. When conducting activities, the teacher borrows a box of equipment from
the Cooperative Extension Office. Goals and objectives not stated for each activity specifically,
but for the unit overall. Many of same concepts presented at each grade level (especially
grades 1 and 2). Grade 4 examines climate effects, not a usual part of most water curricula.
Grade 5 curriculum emphasizes soil and erosion. Includes suggestions for activities for science
fairs and an environmental education packet from the Garden Club of America. Reading level
and concepts may be too advanced for suggested grade levels.
|Water Res. Professional's Outreach Notebook: Ground Water
||This publication was developed for educational outreach. It provides a mechanism whereby an individual employed in a scientific field associated with water resources assists an instructor (school teacher or youth group leader) in presenting information on selected groundwater topics. The materials require an instructor and water resources professional to work together. It is divided into two sections, one for an instructor and one for the water resources professional. Five lessons are included: aquifer, porosity, permeability, wells and calculations. The document is currently only available online, not as hard copy.
|Water Resource Education: Water You Can Make A Difference (K-3)
||Binder contains K - 3 kit and materials for grades 4 - 6. It is not
immediately clear which materials are for teachers and which for
students. K - 3 activities cover the significance of water, the
water cycle, information about the New York water supply, and
hazardous household products. Materials for grades 4 - 6 include
importance of water, the water cycle, water supply, water
contamination, and water conservation.
|Water Resource Education: Youth Education Curricula
||See notes for K - 3 version. This set contains some materials first
developed for WET (North Dakota). The curriculum correlates with NY
state syllabus-elementary science level III, Ecosystems. Reading
level may be too advanced for 4-6 graders.
||Nebraska's is reviewed since the Nebraska materials pioneered this
approach. Unique approach includes videos that introduce each of
5 units and an accompanying "newspaper" with more information and
activities for youth. Teacher packet provides guidance on how to
use the material. Other unusual aspects include suggestions for
review activities and activities to teach interviewing skills.
Incorporates study skills. Indiana and Missouri also have a Water
|Water Sourcebook: Classroom Activities for Grades 9-12
||Developed by Auburn University at Montgomery and Troy State University, this curriculum features hands-on activities which build knowledge and skills to assess water quality and the factors which influence it. The scope of topics is broad and student-focused investigations successfully address riparian ownership and water rights, mining and forestry practices, risk assessment, international water disputes, and the financial aspects of our environmental infrastructure along with many other issues.
|Water Sourcebook: A Series of Classroom Activities
||Developed as a supplement to a school water education unit, each Water Sourcebook is divided into six chapters: Introduction to Water, Drinking Water and Wastewater Treatment, Groundwater Resources, Surface Water Resources, Wetlands, and Coastal Waters. Chapters are correlated with math, science, language arts, social studies and related arts curriculum goals. Each activity within a chapter includes (1) background information, (2) objectives, (3) subject(s), (4) time allotment, (5) materials list, (6) advance preparation, (7) procedure, and (8) resources. A resource section, fact sheets, and a glossary are included at the end of each sourcebook.
|Water Sourcebook: A Series of Classroom Activities for Grades 3-5
||Written by Tennessee Valley Authority, this curriculum set serves as a supplement to a school
water education unit. Water Sourcebooks are available in a scope and sequence format: K-2, 3-5,
6-8, and 9-12. Each Sourcebook provides the same 6 chapters: Introduction; Drinking Water and Waste
Water Treatment; Groundwater; Surface Water; Wetlands; and Coastal Waters. Chapters are correlated
with math, science, language arts, social studies, and related arts curriculum goals. An important
resource provided by this curriculum is a set of brief background fact sheets on 29 water-related topics.
|Water Sourcebook: A Series of Classroom Activities for Grades K-2
||Developmentally appropriate activities introduce primary students to the science of water, the importance of clean drinking water, environmental impacts on surface water, groundwater and contamination, and the importance of wetlands in this curriculum guide developed by the Water Environment Federation in conjunction with EPA. Classroom teachers will appreciate the skills that students acquire in the lessons, such as estimation, measurement, graphing, prediction, and reporting data. Of particular note is the effective use of children's literature in many lessons.
|Water Watcher: Official Resource Manual
||This primary curriculum is built on the Purdue three-stage enrichment model, teaching basic material and presenting group activities that promote the concepts of protection and management of Florida water supplies. It does not offer suggestions for independent projects, the third component of the model. Music is used throughout the unit to present and reinforce concepts. Topics addressed include Florida geography, water sources, salinity, aquatic wildlife, the hydrologic cycle, erosion, acid rain, water treatment, conservation and water-related careers.
||Curriculum aims to improve understanding of personal water
conservation practices. Uses
water science kit and videos to complement written materials.
Instructor materials do not include a separate listing of what
materials will be needed when or what is included in the science
kit. Provides a science and social studies alternative for most
lessons. "Water Wizards" is the companion curriculum for grades
|Water Watchers: Conserving Water at Your School and Home
||This water audit handbook was developed to support water stewardship projects of classrooms involved in the TEAM WET Schools Program. It offers all teachers hands-on water conservation investigations that foster personal responsibility and stewardship of the urban water environment. It presents activities to explore issues, analyze water use, consider conservation options, and take action to effect positive change both in the school and student home environments.
|Water Wisdom: A Curriculum Guide for Grades 4 through 8
||This curriculum is a supplement to the California State Environmental Education Guide, consisting of three units: Water Nurturing Nature, Water Rights and Responsibilities, and Water Symbolism. The units highlight science, social studies and language arts concepts. Lessons focus on the importance of water to all biological systems; examine "ownership" and responsibility regarding water use and distribution; and explore the thematic and symbolic role of water in myths and folklore of various cultures.
||For use in 5-6th grade classrooms. Activities focus on the water
cycle, the aquatic environment, and the causes, effects, and
prevention of water pollution. Provides elementary science syllabus
chart which correlates water activities with elementary science
||Water delivery system and conservation emphasis. Excellent support
material, instructions and diagrams for instructor. "Water
Watchers" is the companion curriculum for grades 7-8.
||These materials were designed to be used in a 4-H club setting. The
folder provides leader and member guides, activity fact sheets and
record keeping sheets. Basic focus is to give youth opportunities
to explore and observe aquatic environments. Collection/sampling
section includes tips on minimal impact sampling, a nice touch.
A water careers component suggests inviting people whose careers
involve water as guest lecturers. Reading material may
be too advanced for the young end of the suggested age range.
|Water, Water Everywhere
||Includes teacher's guide to laboratory and field testing of water
for a variety of parameters supplemented by a separate student text
and teacher resource manual. One of few (if any) curricula to
address radioactive waste. One of few curricula to address concept
of how risk decisions are made in the water quality reference unit
booklet. Includes homework activities.
|Water, Water Everywhere, But.. Where's Everywhere?
||Although developed specifically for grades 5 through 9, activities can be
adapted for K-12th grade students. The booklet is divided into three
sections: a general lesson outline for each unit; background information
in a series of short articles; and, ‘criteria checklists’ to guide and
evaluate student learning. The five units are estimated to take from 5 to
10 days to complete. Activities are primarily instructor-led readings and
discussions. The guide highlights international water issues in the United
States and Africa, Tanzania in particular.
||A 'Teacher's Guide' and 'Youth Activity Worksheets' publication designed to be used in conjunction with each county's 'Watershed Connections' publication in Indiana. Activities include: Watersheds of Indiana; River Discharge; Floods, Floodplains, and Flood Probabilities; Understanding Ground Water Flow; Your Drinking Water; Comparative Ground Water Vulnerability; Pollution Sources; Water Resource Terms; and Web Search.
|Watershed Science for Educators
||Designed as a watershed monitoring resource packet, this curriculum can be incorporated into formal and non-formal education settings. Students will learn to: (1) read topographic maps, (2) interpret aerial photographs, (3) predict potential water quality impacts, (4) identify aquatic invertebrates, (5) calculate water quality indexes, (6) conduct water chemistry tests, (7) measure and record physical measurements of a waterway, and (8) organize and interpret data. The curriculum includes background information, activities, and assessments.
|Watershed to Bay: A Raindrop Journey. A Critical and Creative Thinking Approach to Understanding Coastal Watershed Systems.
||Written for 4th-8th grade youth living in watersheds along the
Massachusetts coast. Activities are designed to help learners develop
critical thinking and investigation skills and an understanding of basic
science concepts about watersheds, estuaries and groundwater systems. This
is accomplished through stories, models, experiments and observation. A
teaching kit is also available for $115.00 and includes the curriculum guide
and a complete supplies kit.
|Ways of the Watersheds, The: An Educator's Guide to the Env. & Cultural Dynamics of NY City's Water Supplies
||A curriculum guide exploring the environmental and cultural dynamics surrounding New York City's watersheds. Units cover the hydrology, geology, and ecology of watersheds; pollution; development; technology; and conservation within the watershed.
|We Depend on Illinois (formerly Water: The Liquid of Life)
||Water education materials for use in fifth grade classrooms.
Materials emphasize text, with some supportive activities. The
modules include: earth as a closed system, the relationship of
water to life, the hydrologic cycle, wastewater treatment, water
protection, water testing and treatment, and lakes. Poster
|Wet and Wild Water
||Written for a broad audience (K-12), activities range from simple
counting to writing resumes and filling out job applications. The
"Core Knowledge" (background info.) consists of a list of facts.
All activities are written for the indoors. There is only one
specific unit that addresses water but from the viewpoint of
manufacturing, marketing, accounting and sales of aquariums. A
unique approach to water education.
|WET in the City: Water Education for Teachers
||WET in the City is a compendium of activities that focus on water resources for urban classrooms, K-12. The activities are organized to address the following concepts: water has unique physical and chemical characteristics, water is essential for all life to exist, water connects to all Earth systems, water is a natural resource, water resources are managed, water resources exist within social constructs, and water resources exist within cultural constructs. The curriculum is only available as part of a workshop and requires the partnership of city government. As of 6/16/03 Washington DC, Los Angeles, Tulsa and Houston were the only cities participating. Check with Project WET about local participation. 713-520-1936.
|Wetland Ecosystems I
||This curriculum developed by Ducks Unlimited Canada is subtitled Habitats, Communities and the Diversity of Life. Nine lessons lead students through an exploration of wetlands. They gather information related to organisms that live in, on, or near water in wetlands, discovering interactions and interdependencies. Experiments highlight the impact of human activity in wetland ecosystems.
|Wetland Ecosystems II
||Subtitled Interactions and Ecosystems, this unit focuses on wetland types, energy pyramids, abiotic factors, feeding adaptations and organism relationships, population effects, and human interventions. It includes a field trip to a local wetland, building on the lessons and teaching students about sampling techniques, observation, teamwork, safety procedures, and data analysis.
|Wetland Ecosystems III
||Subtitled Evolution, Diversity and the Sustainability of Life, this unit's goal is "to help students enhance their understanding of the environmental, technological, and social aspects of science." It examines environmental impact assessment, socio-political considerations in environmental solutions, biodiversity, sustainable development, adaptations, natural selection, wetland types, pollution and taxonomy. A wetland field trip involves students in collection, measurement of water flow and water clarity, identification of plant and animal specimens, and markers of adaptation.
|Wetlands and Wildlife: Alaska Wildlife Curriculum Teacher Information Manuals and Guides
||Materials provide information and teaching activities about Alaska's wetland habitats
and animals for three different grade levels: K-3, 4-6, and junior/senior high school.
Included are wetlands awareness, wetland ecology, human ecology, human impacts on wetlands,
and migratory birds. The lower grade levels emphasize ecology while the activities for higher
levels stress investigation and action skills. Field trip materials provide significant support
for issues investigation activities.
|Wetlands: A Major North America Issue. An Environmental Case Study
for Grades 6-9.
||This study guide applies wetland study to four Environmental
Education Goals: (1) science foundations; (2) issue awareness; (3)
issue investigation, and (4) citizenship action. The author uses
Dr. Seuss's The Lorax as the sample case study at each Goal
Level. Students are introduced to several human values and beliefs
toward wetlands, as well as the effects of human presence on
wetlands in a "Wetland Issues Web." Students then collect and
analyze opinionnaires and questionnaires of the community's
perception of wetlands. This summarized data leads to the next
Goal Level, Citizenship Action, where students suggest solutions to
the identified problems. The author provides a section on Types of
Issue Action Methods to assist students and adults with citizenship
actions necessary to solve community issues.
|What is Water?: A Stream Becomes an Ocean.
||Materials cover the four topics listed in the title. Designed as
school curriculum or school enrichment. Includes leader and member
||This curriculum is divided into three modules: Vanishing Wetlands;
Gata Data; and Louisiana Redfish. Each unit includes a background
information unit plan and a video unit plan (the video accompanies
the curriculum). The curriculum is not clearly organized between the unit
plans and the video unit plans. All units strongly emphasize the
ecological and economical value of wetlands, redfish, and alligators.
All units incorporate ecological concepts including niche, habitat,
eutrophication, ecosystem, biotic and abiotic factors.
|Wise Water Ways
||Three units designed for third through fifth grades. Emphasizes
water conservation in a desert environment.
|Wonderful World of Water. A Curriculum Guide for Elementary Schools.
||Designed for the K-5 audience, the activities are divided into 4
units: the water cycle, water properties, water ecosystem, and
water use by humans. A few activities draw relationships between
water transport and human physiological functions, e.g., nutrient
transport via the blood. Some activities may be too advanced for
primary grades and will have to be adapted. Authors include a list
of "Interdisciplinary Ideas" for the educator.
|World of Fresh Water
||World of Fresh Water: A Resource for Studying Issues of Freshwater Research was designed to promote understanding and appreciation of freshwater systems as plant and animal habitat for students in grades 4-6, but is adaptable for older students. The sixteen activities in this EPA-developed curriculum address water use, ecosystems, food chains, and pollution of fresh water. Students create and monitor pond models. They perform experiments that demonstrate the efficacy of dilution and bioremediation, the impact of pollutants on aquatic organisms, and bioaccumulation in life forms.
|WOW! The Wonders of Wetlands
||This is an educator's guide to providing activities to help kids
understand wetlands, the wetland community, and wetland issues.
Information is presented in a dense, but lively and attractive
format. One of a few curricula that talk about "natural
pollution," and the effect of weather upon water quality. Excellent
use of kinesthetic games to demonstrate water-related dynamics.
Unique inset for some lessons called "Nature In Your Neighborhood."
Includes suggestions to modify activities for younger and more
advanced students. Materials include restoration and action guides.
Includes suggestions for community action projects at the end.
|Your Impact on Salmon/Fish: A Self-Assessment
||This self-assessment tool for older students and adults queries personal behaviors that affect salmon habitat. Categories of assessment include water use, lawn care and landscaping, electricity consumption, septic system maintenance, storm drains, vehicles, stewardship, chemicals and hazardous waste, volunteerism and active involvement in policy-making. | http://www.uwex.edu/erc/eypaw/listall.cfm?summaries=yes | 13 |
15 | GUIDE 2: VARIABLES AND HYPOTHESES
GUIDE 3: RELIABILITY, VALIDITY, CAUSALITY, AND EXPERIMENTS
GUIDE 4: EXPERIMENTS & QUASI-EXPERIMENTS
GUIDE 5: A SURVEY RESEARCH PRIMER
GUIDE 6: FOCUS GROUP BASICS
GUIDE 7: LESS STRUCTURED METHODS
GUIDE 8: ARCHIVES AND DATABASES
5481 METHODS OF EDUCATIONAL RESEARCH
At this point, you are fairly itching to begin your design. But we still have important conceptual material to cover. After all, you want your measures to be reliable and valid, your statements about causality to be appropriate, and your findings to be generalizable.
In order to make any kind of causal assessments in your research situation, you must first have reliable measures, i.e., stable and/or repeatable measures. If the random error variation in your measurements is so large that there is almost no stability in your measures, you can't explain anything! Picture an intelligence test where an individual's scores ranged from moronic to genius level over a short period of time. No one would place any faith in the results of such a "test" because the person's scores were so unstable or unreliable.
Reliability is required to make statements about validity. However, reliable measures could be biased (and hence "untrue" measures of a phenomenon) or confounded with other factors, such as acquiescence response set. Picture a scale that always weighs five pounds too light. The results are reliable, but inaccurate or biased. Or, picture an intelligence test on which women or people of color always score lower (even if this doesn't occur on other tests). Again, the measure may be reliable but biased.
Note that some estimates of reliability are based on the number of items in the test or scale (Cronbach's Alpha is one example). Thus, we might have a long measure, with a lot of items, that will appear "reliable," yet when we examine the measure closely, we discover that the correlations among items are low. This means that items in that measure just don't seem to "hang together" or relate well to each other and your measure may be multidimensional. While this is a "judgement call," be advised that it is desirable for "reliable measures" to also be unidimensional measures, i.e., to measure one and only one construct. It is much easier to interpret unidimensional measures.
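As a concrete (and purely illustrative) sketch of these ideas, the short Python example below computes Cronbach's alpha and the inter-item correlation matrix for a small, made-up 4-item scale. The data are hypothetical and not from any real instrument; the point is only that a respectable alpha should be read alongside the inter-item correlations before treating a measure as unidimensional.

```python
# Illustrative sketch only: Cronbach's alpha and inter-item correlations
# for a hypothetical 4-item scale (rows = respondents, columns = items).
import numpy as np

items = np.array([
    [4, 5, 4, 5],
    [2, 1, 2, 2],
    [3, 3, 4, 3],
    [5, 4, 5, 4],
    [1, 2, 1, 2],
    [4, 4, 3, 4],
], dtype=float)

k = items.shape[1]                          # number of items
item_vars = items.var(axis=0, ddof=1)       # variance of each item
total_var = items.sum(axis=1).var(ddof=1)   # variance of total scale scores

# Cronbach's alpha = k/(k-1) * (1 - sum of item variances / total variance)
alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Low off-diagonal correlations would warn that a "reliable-looking" alpha
# may be hiding a multidimensional measure.
inter_item_corr = np.corrcoef(items, rowvar=False)

print(f"Cronbach's alpha: {alpha:.2f}")
print("Inter-item correlations:")
print(inter_item_corr.round(2))
```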
Internal validity addresses the "true" causes of the outcomes that you observed in your study. Strong internal validity means that you not only have reliable measures of your independent and dependent variables BUT a strong justification that causally links your independent variables to your dependent variables. At the same time, you are able to rule out extraneous variables, or alternative, often unanticipated, causes for your dependent variables. Thus strong internal validity refers to the unambiguous assignment of causes to effects. Internal validity is about causal control.
Laboratory "true experiments" have the potential to make very strong causal control statements. Random assignment of subjects to treatment groups (see below) rules out many threats to internal validity. Further, the lab is a controlled setting, very often the experimenter's "stage." If the researcher is careful, nothing will be in the laboratory setting that the researcher did not place there. When we leave the lab to do studies in natural settings, we can still do random assignment of subjects to treatments, but we lose control over potential causal variables in the study setting (dogs bark, telephones ring, the experimental confederate just got run over walking against the "don't walk" sign on West Tennessee.)
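A minimal sketch, assuming a hypothetical list of 20 subjects, of what random assignment to treatment and control groups looks like in code (illustrative only; the subject labels and group sizes are invented):

```python
# Illustrative sketch: random assignment of hypothetical subjects to
# treatment and control groups, the step that rules out many threats
# to internal validity.
import random

random.seed(42)  # fixed seed so the example is reproducible

subjects = [f"S{i:02d}" for i in range(1, 21)]  # 20 hypothetical subjects
random.shuffle(subjects)                        # chance determines order

half = len(subjects) // 2
treatment_group = subjects[:half]               # first half -> treatment
control_group = subjects[half:]                 # second half -> control

print("Treatment:", treatment_group)
print("Control:  ", control_group)
```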
External validity addresses the ability to generalize your study to other people and other situations. To have strong external validity, you ideally need a probability sample of participants or respondents drawn using "chance methods" from a clearly defined population (all registered students at Florida State University in the Fall 2008 semester, for example). Ideally, you will have a good sample of groups (e.g., classes at all ability levels). You will have a sample of measurements and situations (you study who follows a confederate who violates the "don't walk" signs at different times of day, different days, and different locations on campus.) When you have strong external validity, you can generalize to other people and situations with confidence. Public opinion surveys typically place considerable emphasis on defining the population of interest and drawing good samples from that population. On the other hand, laboratory experiments often employ "convenience samples," such as intact college classes taught by a friend. As a result, we may not know whom the subjects represent.
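For comparison with random assignment, here is an equally small sketch of a "chance method" for selection: drawing a simple random sample from a clearly defined (but hypothetical) population frame. The frame and sample size are made up for illustration.

```python
# Illustrative sketch: a simple random sample from a hypothetical
# population frame (e.g., a registration list), the kind of chance-based
# selection that supports external validity.
import random

random.seed(7)
population = [f"student_{i}" for i in range(1, 5001)]  # hypothetical frame of 5,000
sample = random.sample(population, k=200)              # simple random sample, n = 200

print(len(sample), "sampled; first five:", sample[:5])
```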
Construct validity is about the correspondence between your concepts (constructs) and the actual measurements that you use. A measure with high construct validity accurately reflects the abstract concept that you are trying to study. Since we can only know about our concepts through the concrete measures that we use, you can see that construct validity is extremely important. It also becomes clear why it is so important to have very clear conceptual definitions of our variables. Only then can we begin to assess whether our measures, in fact, correspond to these concepts. This is a critical reason why you first worked with concepts, and only then began to work on operationalizing them.
If we only use one measure of a concept, about the best we can do is "face validity," i.e., whether the measure appears "on the face of it" to reflect the concept. Therefore, it is wise to use multiple measures of a concept whenever possible. Further, ideally these will be different kinds of measures and designs.
EXAMPLE: You might measure mathematical skill through a paper and pencil test, through having the student work with more geometric problems, such as a wood puzzle, and having the student make change at a cash register. Our faith that we have accurately measured her high math ability is stronger if she performs well on all three sets of tasks.
Construct validity is often established through the use of a multi-trait, multi-method matrix. At least two constructs are measured. Each construct is measured at least two different ways, and the type of measures is repeated across constructs. For example, each construct first might be measured using a questionnaire, then each construct would be measured using a similar set of behavioral observation categories.
Typically, under conditions of high construct validity, correlations are high for the same construct (or "trait") across a host of different measures. Correlations are low across constructs that are different but measured using the same general technique. Sometimes, this is called "triangulating" measures.
Under low construct validity, the reverse holds. Correlations are high across traits using the same "method" (or type of technique or measurement) but low for the same trait measured in different ways. For example, if our estimate of a student's math ability was wildly divergent depending on whether we examined scores on the questionnaire, making change, or the wood puzzle, we would have low construct validity and a corresponding lack of faith in the results.
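The pattern described above can be simulated. In the hedged sketch below (all names, sample sizes, and noise levels are assumptions for the example), two hypothetical traits are each measured by two methods; because the simulated measures track the underlying traits, the same-trait/different-method correlations come out highest, which is the signature of good construct validity in a multi-trait, multi-method matrix.

```python
# Illustrative sketch: a tiny multi-trait, multi-method matrix with
# simulated data. Two traits (math, verbal) are each measured two ways
# (questionnaire, observation).
import numpy as np

rng = np.random.default_rng(0)
n = 100
math_true = rng.normal(size=n)     # latent math ability (hypothetical)
verbal_true = rng.normal(size=n)   # latent verbal ability (hypothetical)

measures = {
    "math_questionnaire":   math_true + rng.normal(scale=0.4, size=n),
    "math_observation":     math_true + rng.normal(scale=0.4, size=n),
    "verbal_questionnaire": verbal_true + rng.normal(scale=0.4, size=n),
    "verbal_observation":   verbal_true + rng.normal(scale=0.4, size=n),
}

names = list(measures)
data = np.column_stack([measures[name] for name in names])
corr = np.corrcoef(data, rowvar=False)

print(names)
print(corr.round(2))  # same-trait, different-method cells should be highest
```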
One implication of all this material is that, of course, we NEVER, NEVER say "intelligence is what this intelligence test measures." Or any other single kind of "test" or assessment, of course.
There are many ways of knowing, and different cultures and subcultures use different expectations and norms about proof and causality. Causality is critical: it tells us what is possible, what can be changed and what is difficult, if not impossible, to change. For example, if you are convinced that biological factors cannot be overcome, you probably will not work with visually impaired children because you would believe that they could not compensate for their disabilities. Causality tells us what are the “prime movers” of the phenomena that we observe.
Consider some different perspectives on causality:
Here are some different ways and means of "proof":
Much of the research process centers around what are the true causal or “independent variables.” What we initially may consider to be “true causal” variables may, instead, turn out to be artifacts of the research process (e.g., questionnaire format response set or experimental reactivity or confounded treatment effects) or the particular group that we studied. Much of science consists of ruling out alternative causes or explanations. While science is one form of knowing and one generic way of gathering evidence that either disconfirms or is suggestive of causality, it is not the only way of doing so. The results of science may or may not be accurate, but without following "the rules" of science, most scientists do not believe one is "doing science." Considerable disagreement occurs between scientists and members of the general public because scientists don't make it clear how our methods of "proof" differ from those commonly used among the general public (e.g., legal arguments).
According to science rules, definitive proof via empirical testing does not exist. Science uses the term "proof" (or, rather, "disproof") differently from the way attorneys or journalists do. Our measurements could be later shown to be contaminated by confounding factors. A correlation could have many causes, only some of which have been identified. Later work can show earlier causes to be spurious, that is, both cause and effect depend on some prior causal (often extraneous) variable. Statistics are NEVER EVER considered to "prove" anything although statistical results CAN disconfirm.
Further, science is a self-correcting process. Another researcher can try to duplicate your results. If your results are interesting, in fact, dozens of researchers may try to duplicate your results. If something was awry with your study, the subsequent research projects should discover and correct this.
We use the rules of science in this course.
|Cancerous Human Lung
This dissection of human lung tissue shows light-colored cancerous tissue in the center of the photograph. While normal lung tissue is light pink in color, the tissue surrounding the cancer is black and airless, the result of a tarlike residue left by cigarette smoke. Lung cancer accounts for the largest percentage of cancer deaths in the United States, and cigarette smoking is directly responsible for the majority of these cases.
"Cancerous Human Lung," Microsoft(R) Encarta(R) 96 Encyclopedia. (c) 1993-1995 Microsoft Corporation. All rights reserved.
|Most people--and most scientists--accept that smoking cigarettes causes lung cancer although the evidence (for humans) is strictly correlational rather than experimental. There are many topics where it is neither possible--nor desirable--to use the experimental method. To accept more correlational evidence it will help to examine the rules below. (SCL)|
Many scientists believe that the ONLY way to establish causality is through randomized experiments. That is one reason why so many methods text books designate experiments–and only experiments--as “quantitative research.”
I have never quite understood, by the way, how the numeric level of one's measures can have much to do with cause. After all, variables such as gender, nationality, and ethnicity can have profound causal effects and they are categorical variables. Authors who make this mistake may also misunderstand causality.
Indeed a moment’s reflection will convince you that experiments are far from the only way to establish causality. Most people now accept that smoking cigarettes causes lung cancer (see the Encarta selection above)–yet no society has ever randomly assigned half its population to smoke cigarettes and the other half not (although there are some experiments with rats). This causal conclusion about smoking and lung cancer is based on correlational or observational evidence, i.e., observing the systematic covariation of two (or more) variables. Cigarette smoking and lung cancer are both "naturalistic" variables, i.e., we must accept the data as nature gave them to us (some authors call these "organismic" variables for "organic.")
There is no doubt that the results from careful, well-controlled experiments are typically easier to interpret in causal terms than results from other methods. However, as you can see, causal inferences are often drawn from correlational studies as well. Non-experimental methods must use a variety of ways to establish causality and ultimately must use statistical control, rather than experimental control. The results of the Hormone Replacement Therapy experiments, released in the summer of 2002, remind us of the great care that must be taken when designing nonexperimental research.
If one variable causes a second variable, they should correlate; thus, causation implies correlation. However, two variables can be associated without having a causal relationship, for example, because a third variable is the true cause of the "original" independent and dependent variables. For example, there is a statistical correlation over months of the year between ice cream consumption and the number of assaults. Does this mean ice cream manufacturers are responsible for crime? No! The correlation occurs statistically because the hot temperatures of summer cause both ice cream consumption and assaults to increase. Thus, correlation does NOT imply causation. Other factors besides cause and effect can create an observed correlation.
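The ice cream/assaults example can be made concrete with a small simulation (all numbers are invented): temperature drives both variables, so they correlate strongly even though neither causes the other, and the association largely vanishes once temperature is statistically controlled.

```python
# Illustrative sketch: a spurious correlation created by a confounder.
# Temperature drives both ice cream consumption and assaults.
import numpy as np

rng = np.random.default_rng(1)
months = 120
temperature = rng.normal(70, 15, size=months)               # hypothetical monthly temps
ice_cream = 2.0 * temperature + rng.normal(0, 10, months)   # driven by temperature
assaults = 1.5 * temperature + rng.normal(0, 10, months)    # also driven by temperature

def corr(x, y):
    return np.corrcoef(x, y)[0, 1]

def residuals(y, x):
    # Residuals of y after a simple linear regression on x.
    slope, intercept = np.polyfit(x, y, 1)
    return y - (slope * x + intercept)

raw_r = corr(ice_cream, assaults)
partial_r = corr(residuals(ice_cream, temperature),
                 residuals(assaults, temperature))

print(f"Raw correlation:                       {raw_r:.2f}")      # substantial
print(f"Partial correlation (temp controlled): {partial_r:.2f}")  # near zero
```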
If one variable causes a second, the cause is the independent variable (also called the explanatory variable or predictor). The effect is the dependent variable (also called the outcome or response variable). If you can designate a distinct cause and effect, the relationship is called asymmetric.
For example, most people would agree that it is nonsense to assume that contracting lung cancer would lead most individuals to smoke cigarettes. For one thing, it takes several years of smoking before lung cancer develops. On the other hand, there is good reason to believe that the carcinogens in tobacco smoke could lead someone to develop lung cancer. Therefore, we can designate a causal variable (smoking) and the relationship is asymmetric.
Two variables may be associated but we may be unable to designate cause and effect. These are symmetric relationships.
For example, men over 30 with greater mental health scores are more likely to be married in the U.S. Aha! Marriage is a "buffer" protecting from the stresses of life, and therefore it promotes greater mental health. Wait! Perhaps the causal direction is the reverse. Men who are in better mental shape to begin with get married. Maybe both are true... When we cannot clearly designate which variable is causal, we have a symmetric relationship.
RULES AND GUIDANCE
Since we cannot manipulate naturalistic variables experimentally to determine cause and effect, yet we know that scientists can and do draw causal conclusions in nonexperimental studies, here is a set of helpful rules for tentatively establishing causality in correlational data.
For a more detailed discussion, I recommend the following book:
Barbara Schneider, Martin Carnoy, Jeremy Kilpatrick, William H. Schmidt, Richard J. Shavelson (2007): Estimating Causal Effects: Using Experimental and Observational Designs. A think tank white paper prepared under the auspices of the AERA Grants Program.
You can download this book for free from the American Educational Research Association.
By the way, there are always alternative causal explanations in experiments too. The study control group may be flawed. Participants' awareness of being studied may create conditions (e.g., anxiety) that mean we do not measure "true" behavior or performance. So even though it may be easier to establish cause in experiments, keep in mind that nothing is fool-proof.
(1) TIME ORDER. The independent variable came first in time, prior to the second variable.
EXAMPLE: Gender or race are fixed at birth. Gender or race can be important causal variables because individuals behave differently toward males or females, and often behave differently toward individuals of different religions or ethnicities.
(2) EASE OF CHANGE. The independent variable is harder to change. The dependent variable is easier to change.
EXAMPLE: One's gender is harder to change than scores on an assessment test or years of school.
(3) "MAJORITY RULE." The independent variable is the cause for most people.
EXAMPLE: Although some people become so fed up with their jobs that they return to school to train for a better job, most people complete their education prior to obtaining a regular year-round, full-time job.
(4) NECESSARY OR SUFFICIENT. If one variable is a necessary or sufficient condition for the other variable to occur, or a prerequisite for the second variable, then the first variable may be the cause or independent variable.
EXAMPLES: A certain type of college degree is often required for certain jobs. At most universities, publications are a prerequisite for being awarded tenure.
(5) GENERAL TO SPECIFIC. If two variables are on the same overall topic and one variable is quite general and the other is more specific, the general variable is usually the cause.
EXAMPLE: Overall ethnic intolerance influences attitudes toward Hispanics.
(6) THE "GIGGLE" OR "SANITY" FACTOR. If reversing the causal order of the two variables seems illogical and makes you laugh, reverse the causal order back.
EXAMPLES: We don't believe choosing a specific college major or engaging in a particular sport determines one's gender.
MEMORIZE THESE SIX RULES.
You will apply them during exams and assignments all semester!
Dedicated to health and fitness, you devised a new exercise plan that you believe will really help people. So you obtain a sample of Educational Psychology undergraduate students. With the flip of a coin, half the students receive a physical and mental health screening and those who are fit begin this new exercise program. The other half also receive a health screening but no exercise regimen. Six weeks later, you re-examine everyone who was physically fit in the screening and compare the two groups. The group receiving the exercise plan now score happier and healthier than the group that did not.
Jubilant over the results, you assert that your new exercise plan contributes to physical and mental fitness!
Or does it? Are your results internally valid?
This study was a "true experiment." In a true experiment--whether laboratory, field, or simulation--participants are randomly assigned to treatment groups using a coin flip or some other probability-based, non-human-judgment method. It is randomization that makes true experiments so strong in internal validity and typically allows us to make relatively strong inferences about causality. It is also random assignment to treatments that distinguishes a true experiment from other kinds of data collection.
Random assignment means that on the average at the beginning of a study, all your treatment groups are about the same. In your physical fitness study, it meant about the same percent of each group "flunked" the screening test and about the same percent exercised on a regular basis, even before your intervention.
Random assignment or "randomization" controls at the beginning for all the variables you can think of, and, more important, all the variables you didn't think of.
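As a rough illustration of why the coin flip matters, the sketch below uses hypothetical data: it randomly splits a sample in two and then compares the groups on a pre-existing variable the researcher never consulted during assignment. With reasonable sample sizes, the group means come out close.

```python
import random

random.seed(1)

# Hypothetical participants, each with a pre-existing fitness score (0-100)
# that is never consulted when assigning groups.
participants = [{"id": i, "baseline_fitness": random.gauss(60, 15)} for i in range(200)]

# Random assignment: a fair "coin flip" for every participant.
for p in participants:
    p["group"] = "exercise" if random.random() < 0.5 else "control"

def mean_baseline(group):
    scores = [p["baseline_fitness"] for p in participants if p["group"] == group]
    return sum(scores) / len(scores)

# On average the two groups start out about the same -- on this variable
# and on every other variable, measured or not.
print(mean_baseline("exercise"), mean_baseline("control"))
```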
This study had another important research design aspect: a control group which did not receive the special exercise program. Control or comparison groups are critical in all kinds of research. If we did not have a control or comparison group, the study would be open to the criticism--and alternative causal explanation--that improvement in health would have occurred in any event among young adults, even had the exercise program never been instituted. Not only did you have a control group, but, in an experiment, participants are randomly assigned to it.
Studies that lack a control group are sometimes called "one shot" studies or sometimes case studies. While the results may be interesting, we are limited in the causal implications we can make from the results of "one shot" research.
We will later examine facets of the "good" control group.
You are pretty sure that you know what improved the health of your experimental subjects: the new exercise program you initiated. And there is a good chance that you are right, because by using random assignment you controlled for several pre-existing conditions or threats to internal validity: participants' general physical health, previous exercise patterns, incidence of depression or their general personal histories which, on the average, would be the same for each group. By using random assignment, you also controlled for any incidental historical conditions (such as an influenza outbreak that year which could influence health in both groups).
Your study has two other important features: a pretest and a posttest. In the pretest, you measured existing conditions on your dependent variables, i.e., mental and physical health among all your participants, whether in the experimental or control group, prior to any intervention at all. This enables you to double-check that your participants were pretty much alike across groups at the beginning of the study. You can also assess the level of change because you have both pretest and posttest information. Then, after your intervention, you reassessed scores on your dependent variables in a posttest. A posttest only design cannot do either of these important sets of measures.
This is often called a "pretest-posttest" experimental design.
You should be advised, however, that the standard pretest-posttest design may pose some threats to internal validity, or the unambiguous assignment of cause and effect. Why? Because simply being measured or observed during the pretest may sensitize some participants and they will behave differently as a result. (For example, being weighed might have sent all subjects to the exercise room for six weeks!) Further, a pretest may interact with an experimental treatment to heighten the effect of the experimental intervention more than it would have ordinarily.
How can you cope with this dilemma? One way is the famous Solomon Four Group Design, considered one of the strongest experimental designs with respect to internal validity. In the Solomon Four Group Design, there are four randomized groups of participants. One group receives a pretest, the experimental treatment, and a posttest. The second group is identical, except it does not receive a pretest. The third group receives a pretest and posttest but a different treatment (this could be a group that receives no treatment at all, for example). The final group receives only the second treatment (such as no treatment) and a posttest. Below is a diagram of the Solomon Four Group Design:
GROUP ONE: Pretest | Treatment 1 | Posttest
GROUP TWO: (no pretest) | Treatment 1 | Posttest
GROUP THREE: Pretest | Treatment 2 | Posttest
GROUP FOUR: (no pretest) | Treatment 2 | Posttest
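One way to keep the four cells straight is to write the design down as data. The sketch below is simply the diagram above restated as a small lookup table; the comments note which comparisons it supports.

```python
# The Solomon Four Group Design as a small lookup table.
# Comparing groups that differ only in whether they were pretested
# (ONE vs TWO, or THREE vs FOUR) helps isolate any effect of the
# pretest itself from the effect of the treatment.
solomon_design = {
    "GROUP ONE":   {"pretest": True,  "treatment": "Treatment 1", "posttest": True},
    "GROUP TWO":   {"pretest": False, "treatment": "Treatment 1", "posttest": True},
    "GROUP THREE": {"pretest": True,  "treatment": "Treatment 2", "posttest": True},
    "GROUP FOUR":  {"pretest": False, "treatment": "Treatment 2", "posttest": True},
}

for group, cells in solomon_design.items():
    print(group, cells)
```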
Solomon Four Group Designs are more expensive because they require more participants and conditions than other types of experimental treatments. But, many researchers believe the advantages are worth the expense.
We will revisit experiments, and compare them with "quasi experiments", in Guide 4.
Some textbooks imply that "intact groups" cannot be part of a "true experiment." This is not necessarily true so assess each situation carefully to see if a true experiment is possible.
Suppose you are studying fourth grade classes. The major way the school divides its fourth grade students into classes is through a systematic alphabetical list. If there are five fourth grade classes, every fifth student goes to Class 1, Class 2, and so on. In other words, there is no reason at this particular school to believe any of the fourth grade classes is distinctive at the very beginning of the school year. If you randomly assign classes to different experimental treatments in this example, you will indeed have a "true experiment." The key is that the intact groups were pretty much assembled using random means in the first place.
Also, if it is the very beginning of the academic year, students in the different classes have not been exposed to different teachers or teaching methods. This will not be true later in the year. If you come in and do your experiment at the very beginning and before the different teachers have made assignments, begun in-depth lessons, etc., you probably do have a "true experiment."
On the other hand, suppose there was a systematic difference among groups before you applied any kind of intervention, such as Honors classes versus regular classes in school. In such a case, even random assignment of intact groups could not produce a true experimental design. The problem is particularly great if a difference between groups relates to a variable you want to study. For example, Honors math students may react differently to a new way of teaching algebra than students in regular classes.
So, study the situation carefully. "True experiments" with intact groups are possible, but only under a very restricted set of conditions. If you don't meet those conditions, it is more likely that you have a "quasi-experiment," which we will examine in Guide 4.
Measure carefully. Measure more than once. Use more than one measure of a construct.
Avoid bias, such as the bathroom scale that always measures 5 pounds too light.
Susan Carol Losh
September 9 2002
Revised February 11 2009
| http://mailer.fsu.edu/~slosh/MethodsGuide3.html | 13
19 | In chemistry, chemical synthesis means using chemical reactions to get a product, or several products. This happens by physical and chemical manipulations. Often, several different chemical reactions are used; one after another. In modern laboratory usage, a chemical synthesis is reproducible (if the experiment is done a second time, it will have the same results as the first time), reliable (not broken by small changes in conditions), and established to work in multiple laboratories.
Chemists start to design a chemical synthesis by selecting compounds to combine. These starting compounds are known as reagents or reactants. Chemists use various reaction types to these to synthesize the product, or an intermediate product. This requires mixing the compounds in a reaction vessel. (The vessel can be a chemical reactor or a simple round-bottom flask.) Many reactions require some form of work-up procedure before the final product is isolated.
The amount of product in a chemical synthesis is the reaction yield. Typically, chemical yields are expressed as a weight in grams or as a percentage of the total theoretical quantity of product that could be produced. A side reaction is an unwanted chemical reaction taking place that reduces the yield of the wanted product.
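Since percentage yield is just the actual amount recovered divided by the theoretical maximum, it is easy to compute. The sketch below uses hypothetical numbers purely for illustration.

```python
def percent_yield(actual_grams, theoretical_grams):
    """Percentage yield = (actual product / theoretical maximum) * 100."""
    return 100.0 * actual_grams / theoretical_grams

# Hypothetical example: a reaction that could at most give 12.5 g of product
# actually gives 9.8 g after work-up and isolation.
print(percent_yield(9.8, 12.5))   # 78.4
```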
The chemist Adolph Wilhelm Hermann Kolbe was the first to use the word synthesis in its present day meaning.
In most cases, a single reaction will not convert a reactant (starting chemical) into the desired reaction product. Chemists have many strategies to find the best sequence of reactions to make the desired product. In cascade reactions multiple chemical changes take place within a single reactant. In multi-component reactions up to 11 different reactants form a single reaction product. In a telescopic synthesis, one reactant goes through multiple transformations without isolating intermediates after each step.
Organic synthesis
Organic synthesis is a special type of chemical synthesis. Only organic compounds are created in organic synthesis. The total synthesis of a complex product may take many steps to reach the goal product, and these steps can take a long time. Chemists value skill in organic synthesis and the ability to find a synthesis path with the least number of steps. The synthesis of very valuable or difficult compounds has earned chemists, such as Robert Burns Woodward, the Nobel Prize in Chemistry.
If a chemical synthesis starts from basic laboratory compounds and yields something new, it is a "purely synthetic process". If it starts from a product isolated from plants or animals and then proceeds to new compounds, the synthesis is called a "semisynthetic process".
Other meanings
Most times, chemical synthesis means the overall, many step procedure for making a desired product. Sometimes, chemists use "chemical synthesis" to mean just a direct combination reaction. In a direct combination reaction, two or more reactants combine to form a single product. The chemical equation for a direct combination reaction is:
- A + B → AB
- 2Na + Cl2 → 2 NaCl (formation of table salt)
- S + O2 → SO2 (formation of sulfur dioxide)
- 4 Fe + 3 O2 → 2 Fe2O3 (iron rusting)
- CO2 + H2O → H2CO3 (carbon dioxide dissolving and reacting with water to form carbonic acid)
Four special synthesis rules are:
- metal-oxide + H2O → metal(OH)
- non-metal-oxide + H2O → oxi-acid
- metal-chloride + O2 → metal-chlorate
- metal-oxide + CO2 → metal carbonate (CO3)
Other pages
- Vogel, A.I., Tatchell, A.R., Furnis, B.S., Hannaford, A.J. and P.W.G. Smith. Vogel's Textbook of Practical Organic Chemistry, 5th Edition. Prentice Hall, 1996. ISBN 0582462363. | http://simple.wikipedia.org/wiki/Chemical_synthesis | 13 |
45 | Form of argument that, in its most commonly discussed instances, has two categorical propositions as premises and one categorical proposition as conclusion. An example of a syllogism is the following argument: Every human is mortal (every M is P); every philosopher is human (every S is M); therefore, every philosopher is mortal (every S is P). Such arguments have exactly three terms (human, philosopher, mortal). Here, the argument is composed of three categorical (as opposed to hypothetical) propositions, it is therefore a categorical syllogism. In a categorical syllogism, the term that occurs in both premises but not in the conclusion (human) is the middle term; the predicate term in the conclusion is called the major term, the subject the minor term. The pattern in which the terms S, M, and P (minor, middle, major) are arranged is called the figure of the syllogism. In this example, the syllogism is in the first figure, since the major term appears as predicate in the first premise and the minor term as subject of the second.
Consider the classic example: all humans are mortal (major premise); Socrates is a human (minor premise); therefore, Socrates is mortal (conclusion). Each of the three distinct terms represents a category, in this example, "human," "mortal," and "Socrates." "Mortal" is the major term; "Socrates," the minor term. The premises also have one term in common with each other, which is known as the middle term — in this example, "human." Here the major premise is universal and the minor particular, but this need not be so. For example:
All mortal things die; all men are mortal things; therefore, all men die. Here, the major term is "die," the minor term is "men," and the middle term is "[being] mortal things." Both of the premises are universal.
A sorites is a form of argument in which a series of incomplete syllogisms is so arranged that the predicate of each premise forms the subject of the next until the subject of the first is joined with the predicate of the last in the conclusion. For example, if one argues that a given number of grains of sand does not make a heap and that an additional grain does not either, then to conclude that no additional amount of sand will make a heap is to construct a sorites argument.
The premises and conclusion of a syllogism can be any of four types, which are labelled by letters as follows.
A: All S are P (universal affirmative) - All humans are mortal.
E: No S are P (universal negative) - No humans are perfect.
I: Some S are P (particular affirmative) - Some humans are healthy.
O: Some S are not P (particular negative) - Some humans are not clever.
(See Square of opposition for a discussion of the logical relationships between these types of propositions.)
By definition, S is the subject of the conclusion, P is the predicate of the conclusion, M is the middle term, the major premise links M with P and the minor premise links M with S. However, the middle term can be either the subject or the predicate of each premise that it appears in. This gives rise to another classification of syllogisms known as the figure. The four figures are:
Figure 1: M-P, S-M (the middle term is the subject of the major premise and the predicate of the minor). Figure 2: P-M, S-M (the middle term is the predicate of both premises). Figure 3: M-P, M-S (the middle term is the subject of both premises). Figure 4: P-M, M-S (the middle term is the predicate of the major premise and the subject of the minor).
Putting it all together, there are 256 possible types of syllogisms (or 512 if the order of the major and minor premises is changed, although this makes no difference logically). Each premise and the conclusion can be of type A, E, I or O, and the syllogism can be any of the four figures. A syllogism can be described briefly by giving the letters for the premises and conclusion followed by the number for the figure. For example, the syllogisms above are AAA-1.
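The count of 256 comes straight from multiplying the choices: four proposition types for the major premise, four for the minor, four for the conclusion, and four figures. A small sketch makes the arithmetic concrete.

```python
from itertools import product

# Each premise and the conclusion is one of the four proposition types,
# and the syllogism is in one of the four figures: 4 * 4 * 4 * 4 = 256.
types = ["A", "E", "I", "O"]
figures = [1, 2, 3, 4]

forms = [f"{major}{minor}{conclusion}-{figure}"
         for major, minor, conclusion, figure in product(types, types, types, figures)]

print(len(forms))   # 256
print(forms[:4])    # ['AAA-1', 'AAA-2', 'AAA-3', 'AAA-4']
```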
Of course, the vast majority of the 256 possible forms of syllogism are invalid (the conclusion does not follow logically from the premises). The table below shows the valid forms of syllogism. Even some of these are sometimes considered to commit the existential fallacy, thus invalid. These controversial patterns are marked in italics.
Figure 1: AAA, EAE, AII, EIO (and AAI, EAO); Figure 2: EAE, AEE, EIO, AOO (and AEO, EAO); Figure 3: AII, IAI, OAO, EIO (and AAI, EAO); Figure 4: AEE, IAI, EIO (and AEO, EAO, AAI). The forms given here in parentheses are the controversial patterns that commit the existential fallacy unless the relevant classes are assumed to be non-empty.
A sample syllogism of each type follows.
Forms can be converted to other forms, following certain rules, and all forms can be converted into one of the first-figure forms.
Syllogism dominated Western philosophical thought until The Age of Enlightenment in the 17th Century. At that time, Sir Francis Bacon rejected the idea of syllogism and deductive reasoning by asserting that it was fallible and illogical. Bacon offered a more inductive approach to logic in which experiments were conducted and axioms were drawn from the observations discovered in them.
In the 19th Century, modifications to syllogism were incorporated to deal with disjunctive ("A or B") and conditional ("if A then B") statements. Kant famously claimed that logic was the one completed science, and that Aristotelian logic more or less included everything about logic there was to know. Though there were alternative systems of logic such as Avicennian logic or Indian logic elsewhere, Kant's opinion stood unchallenged in the West until Frege invented first-order logic.
Still, it was cumbersome and very limited in its ability to reveal the logical structure of complex sentences. For example, it was unable to express the claim that the real line is a dense order. In the late 19th century, Charles Peirce's discovery of second-order logic revolutionized the field and the Aristotelian system has since been left to introductory material and historical study.
For instance, given the following parameters: some A are B, some B are C, people tend to come to a definitive conclusion that therefore some A are C. However, this does not follow. For instance, while some cats (A) are black (B), and some black things (B) are televisions (C), it does not follow from the parameters that some cats (A) are televisions (C). This is because first, the mood of the syllogism invoked is illicit (III), and second, the supposition of the middle term is variable between that of the middle term in the major premise, and that of the middle term in the minor premise (not all "some" cats are by necessity of logic the same "some black things").
Determining the validity of a syllogism involves determining the distribution of each term in each statement, meaning whether all members of that term are accounted for.
In simple syllogistic patterns, the fallacies of invalid patterns are: the undistributed middle; the illicit major; the illicit minor; exclusive premises (two negative premises); an affirmative conclusion drawn from a negative premise; and the existential fallacy. | http://www.reference.com/browse/syllogism | 13
21 | In chemistry, chemical synthesis is purposeful execution of chemical reactions in order to get a product, or several products. This happens by physical and chemical manipulations usually involving one or more reactions. In modern laboratory usage, this tends to imply that the process is reproducible, reliable, and established to work in multiple laboratories.
A chemical synthesis begins by selection of compounds that are known as reagents or reactants. Various reaction types can be applied to these to synthesize the product, or an intermediate product. The amount of product in a chemical synthesis is the reaction yield. Typically, chemical yields are expressed as a weight in grams or as a percentage of the total theoretical quantity of product that could be produced. A side reaction is an unwanted chemical reaction taking place that diminishes the yield of the desired product.
The word synthesis in the present day meaning was first used by the chemist Adolph Wilhelm Hermann Kolbe.
Many strategies exist in chemical synthesis that go beyond converting reactant A to reaction product B. In cascade reactions multiple chemical transformations take place within a single reactant, in multi-component reactions up to 11 different reactants form a single reaction product and in a telescopic synthesis one reactant goes through multiple transformations without isolation of intermediates.
Organic synthesis is a special branch of chemical synthesis dealing with the synthesis of organic compounds. In the total synthesis of a complex product it may take multiple steps to synthesize the product of interest, and inordinate amounts of time. Skill in organic synthesis is prized among chemists and the synthesis of exceptionally valuable or difficult compounds has won chemists such as Robert Burns Woodward the Nobel Prize for Chemistry. If a chemical synthesis starts from basic laboratory compounds and yields something new, it is a purely synthetic process. If it starts from a product isolated from plants or animals and then proceeds to a new compounds, the synthesis is described as a semisynthetic process.
The other meaning of chemical synthesis is narrow and restricted to a specific kind of chemical reaction, a direct combination reaction, in which two or more reactants combine to form a single product. The general form of a direct combination reaction is:
- A + B → AB
- 2Na + Cl2 → 2 NaCl (formation of table salt)
- S + O2 → SO2 (formation of sulfur dioxide)
- 4 Fe + 3 O2 → 2 Fe2O3 (iron rusting)
- CO2 + H2O → H2CO3 (carbon dioxide dissolving and reacting with water to form carbonic acid)
4 special synthesis rules:
- metal-oxide + H2O → metal(OH)
- non-metal-oxide + H2O → oxi-acid
- metal-chloride + O2 → metal-chlorate
- metal-oxide + CO2 → metal(CO3)
- Chemical engineering
- Template-directed synthesis
- Organic synthesis
- Total synthesis
- Peptide synthesis
- Methods in Organic Synthesis
| http://www.wikidoc.org/index.php/Chemical_synthesis | 13
16 | MAT 2013: Critical reasoning tips. Arti Vadnerkar. Last updated: 11:49 am, March 07, 2013
Critical reasoning is a "Math"-oriented section even though numbers don't appear in the problems.
Argument and conclusion
Critical reasoning is based on factual evidence and draws a conclusion that may or may not rely on certain unspoken assumptions. The conclusion is usually based on relationship between several “entities” discussed in the argument. Entities can be people, groups of people, money, businesses, or just about any noun.
3 things to identify in a critical reasoning problem:
-Identify an unspoken assumption made by the argument
-Identify additional evidence that would strengthen the conclusion of the argument
-Draw an inference or conclusion based on the given evidence
3 tips to solve critical reasoning problems:
Take a moment to identify the conclusion, the evidence (statements of fact), and the assumptions (unstated ideas) used in an argument. Remember, the conclusion usually includes words like "as a result" or "therefore."
You should know the terminologies used for the assumption which could be inference, evidence, conclusion, paradox, logical flaw, etc. As you go through MAT practice tests, take note of such words in the argument.
If a question asks you for a statement which best weakens an argument, eliminate answer choices which strengthen the argument and look for the choice that undermines the argument's assumptions.
| http://www.indiancolleges.com/entrance-exam-news/MAT-2013-Critical-reasoning-tips/3700 | 13
17 | Algebra Fundamental Operations
The primary operations in algebra are the same as in common arithmetic - namely, addition, subtraction, multiplication, and division; and from the various combinations of these four, all the others are derived.
Rule. Add together the coefficients of the quantities, prefix the common sign to the sum, and annex the letter or letters common to each term.
Rule. Add the positive coefficients into one sum, and the negative ones into another; then subtract the less of these sums from the greater, prefix the sign of the greater to the remainder and annex the common letter or letters as before Case 3. To add unlike quantities.
The reason of the rule for subtraction may be explained thus. Let it be required to subtract 2p - 3q from m + n. If we subtract 2p from m + n, there will remain m + n - 2p; but if we are to subtract 2p - 3q, which is less than 2p, it is evident that the remainder will be greater by a quantity equal to 3q; that is, the remainder will be m + n - 2p + 3q; hence the reason of the rule is evident.
III. Multiplication. This rule, which is given by Diophantus as the definition of + and -, may be said to constitute the basis of algebra as distinct from arithmetic.
If we admit the definitions given above, the rule may be demonstrated in the following way: +a x +b = +ab is assumed.
+a x -b will have the same value, whatever -b may be connected with, as it has when -b is connected with +b (Law 1).
Now +a x (+b - b) = +a x 0 = 0 (Def.). But +a x (+b - b) = +a x +b, together with +a x -b (Law 2).
Therefore +a x +b and +a x -b make up 0; i.e., +ab and +a x -b make up 0.
Now +ab - ab = 0, therefore +a x -b = -ab.
Similarly -a x -b = +ab.
The examples of multiplication may be referred to two cases ; the first is when both the quantities are simple, and the second when one or both of them are compound.
Case 1. To multiply simple quantities.
Rule. Find the sign of the product by the general rule, and annex to it the product of the numeral coefficients; then set down all the letters, one after another, as in one word.
Case 2. To multiply compound quantities.
Rule. Multiply every term of the multiplicand by all the terms of the multiplier, one after another, according to the preceding rule, and collect their products into one sum, which will be the product required.
When several quantities are multiplica together so as to constitute a product, each of them is called a factor of that product : thus a, b, and c are factors of the product abc ; also, a + x and b - x are factors of the product (a + x) . (b - x).
The products arising from the continual multiplication of the same quantity are called powers of that quantity, which is called the root. Thus aa, aaa, aaaa, &c., are powers of the root a. These powers are commonly expressed by placing above the root, towards the right hand, a figure denoting how often the root is repeated. This figure serves to denominate the power, and is called its index or exponent. Thus, the quantity a being considered as the root, or as the first power of a, we have aa or a^2 for its second power, aaa or a^3 for its third power, aaaa or a^4 for its fourth power, and so on.
The second and third powers of a quantity are generally called its square and cube.
By considering the notation of powers, and the rules for multiplication, it appears that powers of the same root are multiplied by adding their exponents. Thus a x a^3 = a^4, also x^3 x x^4 = x^7; and in general a^m x a^n = a^(m+n).
When the quantities to be multiplied appear under a symmetrical form, the operation of multiplying them may sometimes be shortened by detached coefficients, by symmetry, and by general considerations suggested by the particular examples under consideration.
Ex. 1. Multiply x^4 - 3x^3 + 2x^2 - 7x + 3 by x^2 - 5x + 4. Here the powers of x occur in regular order, so that we need only write down the coefficients of the several terms during the operation, having it in our power to supply the x's whenever we require them. The last line of the working (for which the result might have been written down in full at once) is equivalent to x^6 - 8x^5 + 21x^4 - 29x^3 + 46x^2 - 43x + 12.
When any terms are wanting, they may be supplied by zeros; thus, Ex. 2. Multiply x^4 - 7x^3 + x - 1 by x^3 - x + 2.
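The method of detached coefficients amounts to multiplying the two coefficient lists (a convolution), supplying zeros for any missing powers. A short sketch, reproducing Example 1, shows the idea; the function name is just illustrative.

```python
def multiply_coefficients(p, q):
    """Multiply two polynomials given as coefficient lists, highest power first."""
    result = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            result[i + j] += a * b
    return result

# Example 1: (x^4 - 3x^3 + 2x^2 - 7x + 3) * (x^2 - 5x + 4).
# Any missing power would be entered as 0.
p = [1, -3, 2, -7, 3]
q = [1, -5, 4]
print(multiply_coefficients(p, q))
# [1, -8, 21, -29, 46, -43, 12]  i.e.  x^6 - 8x^5 + 21x^4 - 29x^3 + 46x^2 - 43x + 12
```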
We may take advantage of symmetry by two considerations either separately or combined.
(1.) Symmetry of a Symbol.
Ex. Find the sum of (a + b - 2c)^2 + (a + c - 2b)^2 + (b + c - 2a)^2.
Here a^2 occurs with 1 as a multiplier in the first square, with 1 as a multiplier in the second square, and with 4 as a multiplier in the third square, so 6a^2 is part of the result; ab occurs with 2 as a multiplier in the first square, with -4 in the second, and with -4 in the third, so -6ab is part of the result.
But a^2, b^2, c^2 are similarly circumstanced, as also ab, ac, bc; hence the whole result must be 6(a^2 + b^2 + c^2 - ab - ac - bc).
(2.) Symmetry of an Expression.
Ex. Find the sum of (a + b + c)(x + y + z) + (a + b - c)(x + y - z) + (a - b + c)(x - y + z) + (-a + b + c)(-x + y + z). First, the product of (a + b + c) by x + y + z is to be found by multiplying out term by term. It is ax + ay + az + bx + by + bz + cx + cy + cz.
The product of (a + b - c)(x + y - z) is now simply written down from the above, by changing the sign of every term which contains one only of the two quantities affected with a - sign, i.e., in this case c and z.
Lastly, the four products may be arranged below each other, the signs alone being written down; and the sum required is therefore 4ax + 4by + 4cz.
Now a, b, c are similarly involved in (a + b + c)^3; therefore b^3 and c^3 must appear along with a^3, and 3b^2c, 3b^2a, &c., along with 3a^2b, and hence we can at once write down all the terms except that which contains abc. To obtain the coefficient of abc, we observe that if a, b, and c are each equal to 1, (a + b + c)^3 is reduced to 3^3 or 27. In other words, there are 27 terms, if we consider 3a^2b and every similar expression as three terms; and as the terms preceding abc are in this way found to be 21 in number, we require 6abc to make up the full number 27. It is desirable to introduce here some examples of the application of the process of the substitution of a letter for any number or fraction to the properties of numbers, inequalities, &c.
Properties of Numbers.
Ex. 1. If unity is divided into any two parts, the difference of their squares is equal to the difference of the parts themselves.
Let x stand for one part; 1 - x for the other. Then x^2 - (1 - x)^2 = 2x - 1 = x - (1 - x);
i.e., the difference of the squares of the parts is equal to the difference of the parts.
Ex. 2. The product of three consecutive even numbers is divisible by 48.
Let 2n, 2n+2, 2n+4 be the three numbers; their product is 8n(n + 1)(n + 2). Now, of three consecutive numbers, n, n + 1, n + 2, one must be divisible by 2, and one by 3, so n(n + 1)(n + 2) is divisible by 6, whence the proposition.
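The claim is easy to spot-check by brute force; the sketch below simply runs the product over a range of starting values.

```python
# Product of three consecutive even numbers: 2n * (2n+2) * (2n+4) = 8n(n+1)(n+2).
# Since n(n+1)(n+2) is divisible by 6, the product is divisible by 48.
for n in range(1, 1001):
    value = (2 * n) * (2 * n + 2) * (2 * n + 4)
    assert value % 48 == 0
print("divisible by 48 for n = 1 .. 1000")
```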
Ex. 3. The sum of the squares of three consecutive odd numbers, when increased by 1, is divisible by 12, but never by 24.
Let 2n - 1, 2n + 1, 2n + 3 be the three odd numbers.
The sum of their squares when increased by 1 is 12n^2 + 12n + 12 = 12(n^2 + n + 1) = 12(n(n + 1) + 1).
Now, either n or n + 1 is even, so n(n + 1) + 1 is odd; hence the sum under consideration is 12 times an odd number, whence the proposition.
Additional Examples in Symmetry, &c.
Ex. 1. (a + b + c)^2 + (a + b - c)^2 + (a + c - b)^2 + (b + c - a)^2 = 4(a^2 + b^2 + c^2).
This is written down at once, from observing that a^2 occurs in each of the four expressions, and that 2ab occurs with a + sign in two, and with a - sign in the other two. There is no other form.
Ex. 2. (a + b + c)^3 + (a + b - c)^3 + (a + c - b)^3 + (b + c - a)^3 = 2(a^3 + b^3 + c^3) + 6(a^2b + a^2c + b^2a + b^2c + c^2a + c^2b) - 12abc. 1st, a^3 occurs + in three, and - in one term.
2d, 3a^2b occurs + in three, and - in one term.
3d, When a, b, c are all units, the number resulting is 30; there are 30 terms, and as (1st) and (2d) make up 42, there fall to be subtracted 12, i.e., the coefficient of abc is -12.
Ex. 3. (ax + by + cz)^2 + (ax + cy + bz)^2 + (bx + ay + cz)^2 + (bx + cy + az)^2 + (cx + ay + bz)^2 + (cx + by + az)^2 = 2(a^2 + b^2 + c^2)(x^2 + y^2 + z^2) + 4(ab + ac + bc)(xy + xz + yz).
Ex. 4. The difference of the squares of two consecutive numbers is equal to the sum of the numbers.
Ex. 5. The sum of the cubes of three consecutive numbers is divisible by the sum of the numbers.
Ex. 6. If x is an odd number, x^5 - x is divisible by 24, and (x^2 + 3)(x^2 + 7) by 32.
Ex. 7. If (pq_ r)2 4(p2 g)(pr q)2 = then will 4(p2 - q)3= (2p3 - 3p1+2.)2, and 4(72 - P1)3= (273 - 3Pqr +r2)2.
Let the left hand side equal the right + u ; then multiplying out, when a is greater than 5, and b greater than c; then is y = 0. As the argument concerns y, multiply out, and arrange in order of powers of y. After reduction this results in (a2 - ,2)b4114+ (a2_ oxb2_ c2),2x2+ (a2 b2),2,2152v2 + ((52..L c9a2x2_ b2),2,212=0.
Now each of these three terms is a positive quantity, if it he not zero, and as the sum of three positive quantities cannot be equal to zero, it follows that each term must be separately equal to zero, The demonstrations of inequalities are of so simple and instructive a character, that a somewhat lengthened exhibition of them forms a valuable introduction to the higher processes of the science. In all that follows under this head, the symbols x, y, z stand for positive numbers or fractions, usually unequal.
Because (x - y)^2 is +, whether x be greater or less than y, it follows that x^2 - 2xy + y^2 is +, i.e., is some positive number or fraction. It will be remarked that when x and y are equal, the inequality rises into an equality, and this is common to all inequalities of the character under discussion.
Ex. 6. The arithmetic mean of any number of quantities (all positive) is greater than the geometric.
(The arithmetic mean is the sum of the quantities divided by their number; the geometric is that root of their product which is denoted by their number.) Let the quantities be denoted by x1, x2, x3, ..., xn, the numbers 1, 2, 3, placed under the x indicating order only, so that x1 may be read the first x, x2 the second x, &c. Example 1 gives (x1 + x2)/2 > sqrt(x1 x2), if we suppose the x and y of that example to be sqrt(x1), sqrt(x2) of the present.
In the same way we prove the proposition for 8, 16, or any number of quantities which is a power of 2.
For any other number, such, for instance, as 5, the following process is employed : - The number is made up to 8 by the insertion of three quantities, each equal to the arithmetic mean of the other five, viz., Call this quantity y; then y3>x,x2 . . .
xi+x3+ ' • • x.s y or > 2' 3` 4'4 a• 5 Col% As a particular case, x3+ y3+23>3xyz.
Ex. 7. Given xix, ... x,, = y", to prove that (1 + x1) (1+x„)... (1 -1-x„)>(1+y)'.
The demonstration will be perfectly general in fact, though limited in form, if we suppose the number of quantities to be 5; in which case, x1x2 .. . x5= y5.
(1+ x3) (1 + x4)>(1 + (1 + x2)(1 + y) >(1 + Vx,y)2 (1+0(1+Y) = (1+ 's/Y!/)2 Multiplying these products together, and combining the right hand factors two and two, (1 +x,)(1 + x,) (1+;)(1 > -((1+ Nix1x2)(1 + NIX74)(1+,rir7y) (1 +y))2 > ((1 + ,YX1X,X.,X4)(1 + > (1 + 41x,x,x,,x4x.,y3)2 >(1 + (1 + xl) (1 + x2) . . . . (1 + x)> (1 + y)5 .
Ex. 8. If the sum of n fractions makes up 1, the sum of their reciprocals is greater than the square of their number.
Let xl+x2+ x„=1, then, - +- + . . . 1 + >7?, 1 (example 6).
xi x, x„ x1x2 . . . x„ But tiix,x3 . . . x„.< x3+ • x" (example 6)< 7?, >1/, Vxix2 ... x,, whence 1 - +-1 + ±-1 >n2.
x, x2 s„ x2"n+1(„ n+1( x+x3+ . . . x2.--i < Vw, + :7" 21/, nn Let the numerator and denominator of this fraction be designated by N and D. N may be divided into pairs of terms, at the same distance from either end, viz., 1+ x2", x2 + &c., with or without a middle term, each of which (after .1+:0") is, by example 4, less than that quantity; the middle term, if there be one, being less than (1 +x2"), in either case N<91"---4)--.1(1 + x2") . . . (1.) Again (example 6), D>n ",,,/xx3 . . . x2"-i >nx" . (2.) N n+1 ")• greater than - x + - x)' it is only necessary to multi- 2n ply up and reduce the result ; thus, n+ 1( , x + x--) 2n ks +x3+ . . . x2"-11+1 = 2n (2N - 1 -0") n+1 N < ra x (by 1) < N.
Whence the proposition.
Ex. 1. + y + z)2 < 3(x2 + y2 + z2), and generally, (x+y+z)"<3"--1(e+y"+ z"). (See Induction.) Ex. 2. (x+y) (y+ z) (z+x)>8xyz < 3-(x3 + + z3).
Er. 3. r+y4+z4)>xyz(x+y+z).
Ex. 4. (0+62+ c2)( + y2 + z2)>(ax + by+ cz)2 Ex. 5. The arithmetic mean of the pth powers of n positive quantities is greater than the pth power of their mean, and also greater than the mean of their Combinations p together.
Ex. G. (ax +by + cz)2 +(ax+ cy+bz)2 +(bx + ay +12 + (bx + cy + azr + ex + ay >Tb + ac + bc)(xy + .xz+ yz, <6 a2 b2 z2) (x2 + .y2 4. z2).
It will be noted that the numerical multiplier of the second term of the powers of a +x already obtained is the same as the index. It is easy to see that this law is general. To demonstrate the fact formally we employ the method of induction.
The argument may be divided into four distinct steps: 1. Inference; 2. Hypothesis; 3. Comparison; 4. Conclusion.
The first step, inference, is the discovery of the probable existence of a law.
The second step, hypothesis, is the assumption that that law holds to a certain point, up to which the opponent to the argument may be presumed to admit it.
The third step consists in basing on this assumption the demonstration of the law to a stage beyond what the opponent was prepared to admit.
The fourth step argues that as the law starts fair, and advances beyond a point at which any opponent is prepared to admit its existence, it is necessarily true.
Ex. 1. To prove that (a + x)^n = a^n + na^(n-1)x + &c.
By multiplication we get (a + x)^4 = a^4 + 4a^3x + &c.
Let it be granted that (a + x)^m = a^m + ma^(m-1)x + &c., where m is the extreme limit to which the opponent will admit of its truth.
By multiplying the equals by a + x, we get (a + x)^(m+1) = a^(m+1) + ma^m x + &c. + a^m x + &c. = a^(m+1) + (m + 1)a^m x + &c.; i.e., if the law be admitted true for m it is proved true for m + 1; in other words, at whatever point the opponent compels us to limit our assumption, we can advance one step higher by argument.
Now, the law is true for 4, therefore it is proved true for 5; and being true for 5, it is proved true for 6, and so on, ad infinitum. Ex. 2. The sum of the cubes of the natural numbers is the square of the sum of the numbers. Let us assume that 1^3 + 2^3 + &c. + x^3 = (x(x + 1)/2)^2. If this be so, then by adding (x + 1)^3 we get 1^3 + 2^3 + &c. + (x + 1)^3 = (x(x + 1)/2)^2 + (x + 1)^3 = ((x + 1)(x + 2)/2)^2. Hence, if the law be true for any one number x, it is also true for x + 1.
IV. But it is true for 2, for 3, for 4, &c.
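A quick numerical check of the induction result; the sketch below just compares both sides for the first few hundred values.

```python
# 1^3 + 2^3 + ... + x^3 should equal (x(x+1)/2)^2, the square of 1 + 2 + ... + x.
for x in range(1, 301):
    sum_of_cubes = sum(k ** 3 for k in range(1, x + 1))
    assert sum_of_cubes == (x * (x + 1) // 2) ** 2
print("identity holds for x = 1 .. 300")
```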
Ex. 3. To prove the inequality (x + y + z)^n < 3^(n-1)(x^n + y^n + z^n).
Let us assume that (x + y + z)^m < 3^(m-1)(x^m + y^m + z^m); then by multiplication we get (x + y + z)^(m+1) < 3^(m-1)(x^(m+1) + y^(m+1) + z^(m+1) + x^m y + y^m x + x^m z + z^m x + y^m z + z^m y).
Now, the earlier inequality examples give x^m y + y^m x < x^(m+1) + y^(m+1), and likewise for the other pairs, so the bracket is less than 3(x^(m+1) + y^(m+1) + z^(m+1)); hence (x + y + z)^(m+1) < 3^m(x^(m+1) + y^(m+1) + z^(m+1)), i.e., the law is true for m + 1 if true for m; but it is true for 2, therefore it is always true.
This rule is derived from the general rule for the signs in multiplication, by considering that the quotient must be such a quantity as, when multiplied by the divisor, shall produce the dividend, with its proper sign.
This definition of division is the same as that of a fraction; hence the quotient arising from the division of one quantity by another may be expressed by placing the dividend above a line, and the divisor below it; but it may also often be reduced to a more simple form by the following rules.
Rule. Divide the coefficient of each term of the dividend by the coefficient of the divisor, and expunge out of each term the letter or letters in the divisor : the result is the quotient.
Ex. Divide 16a^3xy - 28a^2xz^2 + 4a^2x^3 by 4a^2x.
The process requires no explanation. It is founded on Laws II. and III., together with the rule of signs.
The quotient is 4ay - 7z^2 + x^2.
If the divisor and dividend be powers of the same quantity, the division will evidently be performed by subtracting the exponent of the divisor from that of the dividend. Thus a^5, divided by a^3, has for a quotient a^(5-3) = a^2.
Case 2. When the divisor is simple, but not a factor of the dividend.
Rule. The quotient is expressed by a fraction, of which the numerator is the dividend, and the denominator the divisor.
Thus the quotient of 3ab^3, divided by 2mbc, is the fraction 3ab^3/2mbc. It will sometimes happen that the quotient found thus may be reduced to a more simple form, as shall be explained when we come to treat of fractions.
Case 3. When the divisor is compound.
Rule. The terms of the dividend are to be arranged in the order of the powers of some one of its letters, and those of the divisor according to the powers of the same letter. The operation is then carried on precisely as for division of numbers.
To illustrate this rule, let it be required to divide 8a^2 + 2ab - 15b^2 by 2a + 3b; the operation will stand thus: 2a + 3b ) 8a^2 + 2ab - 15b^2 ( 4a - 5b. Here the terms of the divisor and dividend are arranged according to the powers of the quantity a. We now divide 8a^2, the first term of the dividend, by 2a, the first term of the divisor, and thus get 4a for the first term of the quotient. We next multiply the divisor by 4a, and subtract the product 8a^2 + 12ab from the dividend; we get -10ab - 15b^2 for a new dividend.
By proceeding in all respects as before, we find - 5b for the second term of the quotient, and no remainder: the operation is therefore finished, and the whole quotient is 4a - 5b.
The following examples will also serve to illustrate the manner of applying the rule.
3a - b ) 3a^3 - a^2b - 12a^2 + 10ab - 2b^2 ( a^2 - 4a + 2b. Another example: 1 - x ) 1 ( 1 + x + x^2 + x^3 + &c. Sometimes, as in this last example, the quotient will never terminate; in such a case it may either be considered as an infinite series, the law according to which the terms are formed being in general sufficiently obvious; or the quotient may be completed as in arithmetical division, by annexing to it a fraction (with its proper sign), the numerator of which is the remainder, and denominator the divisor. Thus the completed quotient, in the last example, is 1 + x + x^2 + x^3 + x^4/(1 - x). If x be small compared with unity, the remainders, as we advance, continually become smaller and smaller. If, on the other hand, x be large compared with unity, the remainders continually become larger and larger. In this case the quotient is worthless. To obtain a quotient which shall be of any practical value, we must reverse the order of arrangement, putting -x + 1 in place of 1 - x. The division then becomes -x + 1 ) 1, and the quotient proceeds by descending powers of x. As it is generally the largest of the quantities that we desire to divide out, we observe that, in order to effect this, we have had to begin with that quantity. Hence the Rule - The terms of the divisor and dividend are to be arranged according to the powers of that letter which it is wished (if possible) to divide out.
We have spoken as if magnitude alone was the circumstance which should determine the precedence of the letters in a division. In the more advanced processes of algebra there are other circumstances which give precedence to certain letters, such, for example, as the fact that x may and often does stand for the phrase "any quantity," whilst a stands for some determinate numerical quantity. This leads us to exhibit a proposition in division of the greatest value and most extensive application. It is as follows: if an expression in x be divided by x - a, the remainder is found by writing a in place of x in the expression. To prove this proposition we shall employ the following Axiom: If two expressions in x are identical in form and value, but one multiplied out farther than the other, we may write any numerical quantity we please in place of x in both, and the results will be equal.
For example, (x - 1)2 + (x - 1) - 3 is identical with x2- 2(x +1)+ x -1 ; and it is evident that if we write any number (say 1) for x, the results are the same in both.
We now proceed to prove the proposition.
Let the dividend be x^n + px^(n-1) + qx^(n-2) + &c., where n is a whole number, and p, q, &c., positive or negative numerical quantities.
Let the quotient, when this is divided by x - a, be Q, and the remainder, which does not contain x, R; then x^n + px^(n-1) + qx^(n-2) + &c. = Q(x - a) + R, by the definition of Division.
Now this equality is in reality an identity in terms of the axiom. If then we write a in place of x, the results will be equal; this gives a^n + pa^(n-1) + qa^(n-2) + &c. = Q(a - a) + R = R, which is the proposition to be proved.
Ex. 1. If n be any whole number, x^n - a^n is divisible by x - a without remainder.
For the remainder, by the proposition, is a^n - a^n = 0. Ex. 2. If n be an even number, x^n - a^n is divisible by x + a without remainder.
For the remainder is (-a)^n - a^n = 0, since n is even.
Observe that the divisor here has to be changed to x - ( - a), so that - a stands in place of the a of the proposition.
Ex. 3. If n be an odd number, x^n + a^n is divisible by x + a without remainder.
For the remainder is (-a)^n + a^n = 0, because n is odd.
Ex. 4. To prove that 4b^2c^2 - (b^2 + c^2 - a^2)^2 is divisible by -a + b + c; and hence to resolve it into simple factors. Here the x - a of the proposition is replaced by a - (b + c) (the negative sign of the whole divisor being of no consequence).
To determine the remainder, therefore, we write b + c in place of a in the dividend, or thing to be divided; the result is 4b^2c^2 - (b^2 + c^2 - (b + c)^2)^2 = 4b^2c^2 - 4b^2c^2 = 0; hence 4b^2c^2 - (b^2 + c^2 - a^2)^2 is divisible by -a + b + c.
Now, since the dividend contains only squares of a, and b, and c, any change in the sign of a, or b, or c, produces no change in the dividend. What we have just proved then becomes (putting -a for a) the following: 4b^2c^2 - (b^2 + c^2 - a^2)^2 is divisible by a + b + c.
This last becomes (putting -b for b, and then -c for c): 4b^2c^2 - (b^2 + c^2 - a^2)^2 is divisible by a - b + c, and by a + b - c. Hence finally, 4b^2c^2 - (b^2 + c^2 - a^2)^2 = (a + b + c)(-a + b + c)(a - b + c)(a + b - c).
The above example is a good exercise for the student. The result may be more simply arrived at by employing a proposition of very great value and frequent use - that the difference of the squares of two quantities is the product of the sum and difference of the quantities.
Ex. 5. To prove that (1 - a^2)(1 - b^2)(1 - c^2) - (c + ab)(b + ac)(a + bc) is divisible by 1 + abc.
It is simpler here to write a single letter x for abc, whereby the given quantity becomes an expression in x which is obviously under the form p - p when -1 is written for x, and is therefore divisible by 1 + x.
Ex. 6. Prove that (x^2 - x + 1)(x^4 - x^2 + 1)(x^8 - x^4 + 1)(x^16 - x^8 + 1) ... (x^(2n) - x^n + 1) is the quotient of x^(4n) + x^(2n) + 1 by x^2 + x + 1; n being any power of 2.
The divisor (x^2 + x + 1) being multiplied by x^2 - x + 1 gives x^4 + x^2 + 1; which, being again multiplied by x^4 - x^2 + 1, gives x^8 + x^4 + 1; and so on to the end.
Additional Examples in Division.
Ex. 1. Divide 1 - 10x^3 + 15x^4 - 6x^5 by (1 - x)^3.
We must first multiply out (1 - x)^3, and then divide the given expression by the product, 1 - 3x + 3x^2 - x^3. The quotient is 1 + 3x + 6x^2.
Ex. 2. Divide 65x^2y^2 - (x^4 + 64y^4) by x^2 - 7xy - 8y^2.
We must arrange dividend and divisor in terms of powers of one of the letters, say x; the division will then assume the form -x^4 + 65x^2y^2 - 64y^4 divided by x^2 - 7xy - 8y^2, and the quotient is 8y^2 - 7xy - x^2. Ex. 3. Divide the product of x^2 + 3x + 2, x^2 - 5x + 4, and x^4 + 5x^2 - 14 by the product of x^2 - 1, x^2 - 2. Here we observe that x^2 - 1 is the product of x + 1, x - 1.
Now (Art. 20), x^2 + 3x + 2 is divisible by x + 1, and x^2 - 5x + 4 by x - 1. Hence, if the product is divisible by x^2 - 1 and x^2 - 2 without remainder, the third factor, x^4 + 5x^2 - 14, must be divisible by x^2 - 2, which is found to be the case. The quotient required is therefore the product of (x + 2)(x - 4)(x^2 + 7) = x^4 - 2x^3 - x^2 - 14x - 56.
The last line being the sum column by column of the and division of powers with positive integral exponents or fndo three preceding lines. Now, as the upper of these three will apply in every ease, whether the exponents be positive lines contains term by term the quantities required, we or negative, integral or fractional, provided we assume as The first vertical column gives a; the second /3, and so on. | http://www.libraryindex.com/encyclopedia/pages/covwoyvdcq/algebra-fundamental-operations-quantities.html | 13 |
15 | This article describes the formula syntax and usage of the IF function (function: A prewritten formula that takes a value or values, performs an operation, and returns a value or values. Use functions to simplify and shorten formulas on a worksheet, especially those that perform lengthy or complex calculations.) in Microsoft Excel.
The IF function returns one value if a condition you specify evaluates to TRUE, and another value if that condition evaluates to FALSE. For example, the formula =IF(A1>10,"Over 10","10 or less") returns "Over 10" if A1 is greater than 10, and "10 or less" if A1 is less than or equal to 10.
IF(logical_test, [value_if_true], [value_if_false])
The IF function syntax has the following arguments (argument: A value that provides information to an action, an event, a method, a property, a function, or a procedure.):
- logical_test Required. Any value or expression that can be evaluated to TRUE or FALSE. For example, A10=100 is a logical expression; if the value in cell A10 is equal to 100, the expression evaluates to TRUE. Otherwise, the expression evaluates to FALSE. This argument can use any comparison calculation operator.
- value_if_true Optional. The value that you want to be returned if the logical_test argument evaluates to TRUE. For example, if the value of this argument is the text string "Within budget" and the logical_test argument evaluates to TRUE, the IF function returns the text "Within budget." If logical_test evaluates to TRUE and the value_if_true argument is omitted (that is, there is only a comma following the logical_test argument), the IF function returns 0 (zero). To display the word TRUE, use the logical value TRUE for the value_if_true argument.
- value_if_false Optional. The value that you want to be returned if the logical_test argument evaluates to FALSE. For example, if the value of this argument is the text string "Over budget" and the logical_test argument evaluates to FALSE, the IF function returns the text "Over budget." If logical_test evaluates to FALSE and the value_if_false argument is omitted, (that is, there is no comma following the value_if_true argument), the IF function returns the logical value FALSE. If logical_test evaluates to FALSE and the value of the value_if_false argument is blank (that is, there is only a comma following the value_if_true argument), the IF function returns the value 0 (zero).
Use the embedded workbook shown here to work with examples of this function. You can inspect and change existing formulas, enter your own formulas, and read further information about how the function works.
These examples use the IF function to test the values in a cell and either return a text string or perform a math operation and then return a value based on the result.
These examples use the IF function to test the values in a cell and return a text string based on the result.
These examples "nest" the IF function inside another IF function to return results based on a cell. If the result of the first IF function is True (>89), the first value specified ("A") is returned. If the result is False, the second IF function text to see if the value is >79 and, if so, returns "B," and additional IF functions test for a "C," "D," and "F" grade.
These examples show an alternative to "nesting" IF functions – the LOOKUP function – to return letter grades based on a cell value.
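For readers who want to check the same branching logic outside Excel, here is a rough Python sketch of both approaches: nested conditionals (the analogue of nesting IF) and a small lookup table (the analogue of the LOOKUP approach). The grade cutoffs below are only illustrative assumptions based on the grading example described above.

```python
def grade_nested(score):
    """Nested conditionals, the analogue of nesting IF functions."""
    if score > 89:
        return "A"
    elif score > 79:
        return "B"
    elif score > 69:
        return "C"
    elif score > 59:
        return "D"
    else:
        return "F"

def grade_lookup(score):
    """A lookup table, the analogue of using LOOKUP instead of nested IFs."""
    cutoffs = [(90, "A"), (80, "B"), (70, "C"), (60, "D"), (0, "F")]
    for lowest, letter in cutoffs:
        if score >= lowest:
            return letter

print(grade_nested(85), grade_lookup(85))   # B B
```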
| http://office.microsoft.com/en-us/excel-help/if-function-HP010342586.aspx | 13
15 | Ribonucleic acid (RNA) is a ubiquitous family of large biological molecules that perform multiple vital roles in the coding, decoding, regulation, and expression of genes. Together with DNA, RNA comprises the nucleic acids, which, along with proteins, constitute the three major macromolecules essential for all known forms of life. Like DNA, RNA is assembled as a chain of nucleotides, but is usually single-stranded. Cellular organisms use messenger RNA (mRNA) to convey genetic information (often notated using the letters G, A, U, and C for the nucleotides guanine, adenine, uracil and cytosine) that directs synthesis of specific proteins, while many viruses encode their genetic information using an RNA genome.
Some RNA molecules play an active role within cells by catalyzing biological reactions, controlling gene expression, or sensing and communicating responses to cellular signals. One of these active processes is protein synthesis, a universal function whereby mRNA molecules direct the assembly of proteins on ribosomes. This process uses transfer RNA (tRNA) molecules to deliver amino acids to the ribosome, where ribosomal RNA (rRNA) links amino acids together to form proteins.
Comparison with DNA
The chemical structure of RNA is very similar to that of DNA, but differs in three main ways:
- Unlike double-stranded DNA, RNA is a single-stranded molecule in many of its biological roles and has a much shorter chain of nucleotides. However, RNA can, by complementary base pairing, form intrastrand double helices, as in tRNA.
- While DNA contains deoxyribose, RNA contains ribose (in deoxyribose there is no hydroxyl group attached to the pentose ring in the 2' position). These hydroxyl groups make RNA less stable than DNA because it is more prone to hydrolysis.
- The complementary base to adenine is not thymine, as it is in DNA, but rather uracil, which is an unmethylated form of thymine.
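The base-pairing correspondence just described (guanine with cytosine, and adenine with uracil rather than thymine) is simple enough to write out directly. The sketch below is only an illustration: it produces the base-by-base RNA complement of a DNA sequence, ignoring strand direction, the G-U wobble pair, and modified bases.

```python
# Watson-Crick pairing with uracil in place of thymine:
# DNA base -> complementary RNA base.
DNA_TO_RNA = {"A": "U", "T": "A", "G": "C", "C": "G"}

def rna_complement(dna_sequence):
    """Return the base-by-base RNA complement of a DNA sequence."""
    return "".join(DNA_TO_RNA[base] for base in dna_sequence.upper())

print(rna_complement("ATGCCGTA"))   # UACGGCAU
```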
Like DNA, most biologically active RNAs, including mRNA, tRNA, rRNA, snRNAs, and other non-coding RNAs, contain self-complementary sequences that allow parts of the RNA to fold and pair with itself to form double helices. Analysis of these RNAs has revealed that they are highly structured. Unlike DNA, their structures do not consist of long double helices but rather collections of short helices packed together into structures akin to proteins. In this fashion, RNAs can achieve chemical catalysis, like enzymes. For instance, determination of the structure of the ribosome—an enzyme that catalyzes peptide bond formation—revealed that its active site is composed entirely of RNA.
Each nucleotide in RNA contains a ribose sugar, with carbons numbered 1' through 5'. A base is attached to the 1' position: in general, adenine (A), cytosine (C), guanine (G), or uracil (U). Adenine and guanine are purines; cytosine and uracil are pyrimidines. A phosphate group is attached to the 3' position of one ribose and the 5' position of the next. Each phosphate group carries a negative charge at physiological pH, making RNA a charged molecule (a polyanion). The bases may form hydrogen bonds between cytosine and guanine, between adenine and uracil, and between guanine and uracil. However, other interactions are possible, such as a group of adenine bases binding to each other in a bulge, or the GNRA tetraloop that has a guanine–adenine base pair.
An important structural feature of RNA that distinguishes it from DNA is the presence of a hydroxyl group at the 2' position of the ribose sugar. The presence of this functional group causes the helix to adopt the A-form geometry rather than the B-form most commonly observed in DNA. This results in a very deep and narrow major groove and a shallow and wide minor groove. A second consequence of the presence of the 2'-hydroxyl group is that in conformationally flexible regions of an RNA molecule (that is, not involved in formation of a double helix), it can chemically attack the adjacent phosphodiester bond to cleave the backbone.
RNA is transcribed with only four bases (adenine, cytosine, guanine and uracil), but these bases and attached sugars can be modified in numerous ways as the RNAs mature. Pseudouridine (Ψ), in which the linkage between uracil and ribose is changed from a C–N bond to a C–C bond, and ribothymidine (T) are found in various places (the most notable ones being in the TΨC loop of tRNA). Another notable modified base is hypoxanthine, a deaminated adenine base whose nucleoside is called inosine (I). Inosine plays a key role in the wobble hypothesis of the genetic code.
There are nearly 100 other naturally occurring modified nucleosides, of which pseudouridine and nucleosides with 2'-O-methylribose are the most common. The specific roles of many of these modifications in RNA are not fully understood. However, it is notable that, in ribosomal RNA, many of the post-transcriptional modifications occur in highly functional regions, such as the peptidyl transferase center and the subunit interface, implying that they are important for normal function.
The functional form of single-stranded RNA molecules, just like that of proteins, frequently requires a specific tertiary structure. The scaffold for this structure is provided by secondary structural elements, which are held together by hydrogen bonds within the molecule. This leads to several recognizable "domains" of secondary structure such as hairpin loops, bulges, and internal loops. Since RNA is charged, metal ions such as Mg2+ are needed to stabilize many of its secondary and tertiary structures.
Synthesis of RNA is usually catalyzed by an enzyme—RNA polymerase—using DNA as a template, a process known as transcription. Initiation of transcription begins with the binding of the enzyme to a promoter sequence in the DNA (usually found "upstream" of a gene). The DNA double helix is unwound by the helicase activity of the enzyme. The enzyme then progresses along the template strand in the 3' to 5' direction, synthesizing a complementary RNA molecule with elongation occurring in the 5' to 3' direction. The DNA sequence also dictates where termination of RNA synthesis will occur.
There are also a number of RNA-dependent RNA polymerases that use RNA as their template for synthesis of a new strand of RNA. For instance, a number of RNA viruses (such as poliovirus) use this type of enzyme to replicate their genetic material. Also, RNA-dependent RNA polymerase is part of the RNA interference pathway in many organisms.
Types of RNA
Messenger RNA (mRNA) is the RNA that carries information from DNA to the ribosome, the site of protein synthesis (translation) in the cell. The coding sequence of the mRNA determines the amino acid sequence in the protein that is produced. Many RNAs, however, do not code for protein (about 97% of the transcriptional output in eukaryotes is non-protein-coding).
These so-called non-coding RNAs ("ncRNA") can be encoded by their own genes (RNA genes), but can also derive from mRNA introns. The most prominent examples of non-coding RNAs are transfer RNA (tRNA) and ribosomal RNA (rRNA), both of which are involved in the process of translation. There are also non-coding RNAs involved in gene regulation, RNA processing and other roles. Certain RNAs are able to catalyse chemical reactions such as cutting and ligating other RNA molecules, and the catalysis of peptide bond formation in the ribosome; these are known as ribozymes.
In translation
Messenger RNA (mRNA) carries information about a protein sequence to the ribosomes, the protein synthesis factories in the cell. It is coded so that every three nucleotides (a codon) correspond to one amino acid. In eukaryotic cells, once precursor mRNA (pre-mRNA) has been transcribed from DNA, it is processed to mature mRNA. This removes its introns—non-coding sections of the pre-mRNA. The mRNA is then exported from the nucleus to the cytoplasm, where it is bound to ribosomes and translated into its corresponding protein form with the help of tRNA. In prokaryotic cells, which lack separate nucleus and cytoplasm compartments, mRNA can bind to ribosomes while it is being transcribed from DNA. After a certain amount of time, the message degrades into its component nucleotides with the assistance of ribonucleases.
Transfer RNA (tRNA) is a small RNA chain of about 80 nucleotides that transfers a specific amino acid to a growing polypeptide chain at the ribosomal site of protein synthesis during translation. It has sites for amino acid attachment and an anticodon region for codon recognition that binds to a specific sequence on the messenger RNA chain through hydrogen bonding.
Ribosomal RNA (rRNA) is the catalytic component of the ribosomes. Eukaryotic ribosomes contain four different rRNA molecules: 18S, 5.8S, 28S and 5S rRNA. Three of the rRNA molecules are synthesized in the nucleolus, and one is synthesized elsewhere. In the cytoplasm, ribosomal RNA and protein combine to form a nucleoprotein called a ribosome. The ribosome binds mRNA and carries out protein synthesis. Several ribosomes may be attached to a single mRNA at any time. Nearly all the RNA found in a typical eukaryotic cell is rRNA.
Regulatory RNAs
Several types of RNA can downregulate gene expression by being complementary to a part of an mRNA or a gene's DNA. MicroRNAs (miRNA; 21-22 nt) are found in eukaryotes and act through RNA interference (RNAi), where an effector complex of miRNA and enzymes can cleave complementary mRNA, block the mRNA from being translated, or accelerate its degradation.
While small interfering RNAs (siRNA; 20-25 nt) are often produced by breakdown of viral RNA, there are also endogenous sources of siRNAs. siRNAs act through RNA interference in a fashion similar to miRNAs. Some miRNAs and siRNAs can cause genes they target to be methylated, thereby decreasing or increasing transcription of those genes. Animals have Piwi-interacting RNAs (piRNA; 29-30 nt) that are active in germline cells and are thought to be a defense against transposons and play a role in gametogenesis.
Many prokaryotes have CRISPR RNAs, a regulatory system similar to RNA interference. Antisense RNAs are widespread; most downregulate a gene, but a few are activators of transcription. One way antisense RNA can act is by binding to an mRNA, forming double-stranded RNA that is enzymatically degraded. There are many long noncoding RNAs that regulate genes in eukaryotes; one such RNA is Xist, which coats one X chromosome in female mammals and inactivates it.
An mRNA may contain regulatory elements itself, such as riboswitches, in the 5' untranslated region or 3' untranslated region; these cis-regulatory elements regulate the activity of that mRNA. The untranslated regions can also contain elements that regulate other genes.
In RNA processing
Many RNAs are involved in modifying other RNAs. Introns are spliced out of pre-mRNA by spliceosomes, which contain several small nuclear RNAs (snRNA), or the introns can be ribozymes that splice themselves out. RNA can also be altered by having its nucleotides modified to nucleotides other than A, C, G and U. In eukaryotes, modifications of RNA nucleotides are in general directed by small nucleolar RNAs (snoRNA; 60-300 nt), found in the nucleolus and Cajal bodies. snoRNAs associate with enzymes and guide them to a spot on an RNA by base-pairing to that RNA. These enzymes then perform the nucleotide modification. rRNAs and tRNAs are extensively modified, but snRNAs and mRNAs can also be the target of base modification. RNA can also be methylated.
RNA genomes
Like DNA, RNA can carry genetic information. RNA viruses have genomes composed of RNA that encodes a number of proteins. The viral genome is replicated by some of those proteins, while other proteins protect the genome as the virus particle moves to a new host cell. Viroids are another group of pathogens, but they consist only of RNA, do not encode any protein and are replicated by a host plant cell's polymerase.
In reverse transcription
Reverse transcribing viruses replicate their genomes by reverse transcribing DNA copies from their RNA; these DNA copies are then transcribed to new RNA. Retrotransposons also spread by copying DNA and RNA from one another, and telomerase contains an RNA that is used as template for building the ends of eukaryotic chromosomes.
Double-stranded RNA
Double-stranded RNA (dsRNA) is RNA with two complementary strands, similar to the DNA found in all cells. dsRNA forms the genetic material of some viruses (double-stranded RNA viruses). Double-stranded RNA such as viral RNA or siRNA can trigger RNA interference in eukaryotes, as well as interferon response in vertebrates.
Key discoveries in RNA biology
Research on RNA has led to many important biological discoveries and numerous Nobel Prizes. Nucleic acids were discovered in 1868 by Friedrich Miescher, who called the material 'nuclein' since it was found in the nucleus. It was later discovered that prokaryotic cells, which do not have a nucleus, also contain nucleic acids. The role of RNA in protein synthesis was suspected as early as 1939. Severo Ochoa won the 1959 Nobel Prize in Medicine (shared with Arthur Kornberg) after he discovered an enzyme that can synthesize RNA in the laboratory. However, the enzyme discovered by Ochoa (polynucleotide phosphorylase) was later shown to be responsible for RNA degradation, not RNA synthesis.
The sequence of the 77 nucleotides of a yeast tRNA was found by Robert W. Holley in 1965, winning Holley the 1968 Nobel Prize in Medicine (shared with Har Gobind Khorana and Marshall Nirenberg). In 1967, Carl Woese hypothesized that RNA might be catalytic and suggested that the earliest forms of life (self-replicating molecules) could have relied on RNA both to carry genetic information and to catalyze biochemical reactions—an RNA world.
During the early 1970s retroviruses and reverse transcriptase were discovered, showing for the first time that enzymes could copy RNA into DNA (the opposite of the usual route for transmission of genetic information). For this work, David Baltimore, Renato Dulbecco and Howard Temin were awarded a Nobel Prize in 1975. In 1976, Walter Fiers and his team determined the first complete nucleotide sequence of an RNA virus genome, that of bacteriophage MS2.
In 1977, introns and RNA splicing were discovered in both mammalian viruses and cellular genes, resulting in a 1993 Nobel Prize for Philip Sharp and Richard Roberts. Catalytic RNA molecules (ribozymes) were discovered in the early 1980s, leading to a 1989 Nobel Prize for Thomas Cech and Sidney Altman. In 1990 it was found in petunia that introduced genes can silence the plant's own similar genes, an effect now known to result from RNA interference.
At about the same time, 22 nt long RNAs, now called microRNAs, were found to have a role in the development of C. elegans. Studies on RNA interference earned a Nobel Prize for Andrew Fire and Craig Mello in 2006, and in the same year another Nobel Prize was awarded to Roger Kornberg for studies on the transcription of RNA. The discovery of gene-regulatory RNAs has led to attempts to develop drugs made of RNA, such as siRNA, to silence genes.
| http://en.wikipedia.org/wiki/RNA_genome | 13
414 | Critical thinking is a person's ability to carefully consider research, information and opinions, evaluate the available evidence and form his or her own conclusions. Good critical thinkers can clearly argue their positions as well as detect logical problems in other people's ideas. Critical thinking encourages children to question received wisdom rather than learn passively, and it can foster creativity, problem-solving skills, strong writing and research skills and the ability to develop their own thoughts and ideas. Rather than teach critical thinking directly, you can encourage your middle school or high school student to develop critical thinking skills in several ways.
Critical thinking in math refers to the ability to evaluate the presented mathematical problems and think about the best way to solve them. This is a vital skill for math students, as it allows them to tackle their math homework and quizzes efficiently. Critical thinking leads to success, which bolsters confidence and results in better grades for the student.
Making a habit of critical thinking before communication involves more than just examining and changing the way you think. Developing a habit of critical thinking in order to become a more effective communicator requires you to regularly engage in a certain pattern of thinking before you speak until it becomes automatic. Everyone has habitual thought patterns that can be dysfunctional, leading to increased mental distress and negativity, which impacts your overall viewpoint. In order to understand the way you think, you must clearly understand what you want to communicate before you make your argument.
Manipulative behavior is often difficult to pinpoint because it relies on the essential ambiguity of human communication. The psychoanalyst Jacques Lacan famously re-interpreted Freud's original insights using contemporary linguistics. He noted the tendency of words to bring to mind more than merely one concept or idea at once. The word, or signifier, "slides" over the intended meaning, unintentionally evoking other meanings; and since Lacan also includes all other means of representing ideas in the term "signifier" (facial expressions, tones of voice, blushing, going pale, physical gestures, etc.), human communication is full of additional meanings beside the overtly intended one. Manipulative…
With the release of Bloom's Taxonomy in 1956, education in America began focusing on the development of students' higher order thinking skills. With the successful use of higher order thinking skills, unfamiliar problems or dilemmas are solved with explanations or decisions grounded in previous knowledge. Using previously gained knowledge to deduce an answer, think critically, reflect or create indicates the use of higher order thinking skills.
Critical thinking involves thinking clearly and questioning the world. According to Lisa Mabe, Director of Early Childhood Education at Surrey Community College, critical thinking should begin before students reach college. From an early stage in life, young children can think for themselves and learn anything taught to them. The inquiring mind of a child is a precious resource, often driven to become passive and non-questioning because schools fail to teach children critical thinking.
Critical thinking is essentially the process of logically analyzing a situation. It's a skill that becomes particularly helpful when facing a problem. While some people are naturally more adept at problem-solving than others, anyone can learn the skill of critical thinking. Every time you solve a problem, you have exercised your own critical-thinking abilities. By becoming aware of what critical thinking is and applying its methodology to your everyday problems, you will improve your critical-thinking skills.
Critical thinking is defined variously as the synthesis of definition and concept, thought disciplined by reason and evidence, skill in the analysis and synthesis of information and concepts, and reasonable reflection. Critical thinking is important not only to transmit these skills to students, but for the teacher to evaluate and adjust her own performance as an educator.
At the time of publication, the National Association for Gifted Children estimates that there are about 3 million gifted children in the United States, or about 6 percent of the elementary school population. Tight budgets and staffing cuts mean that these gifted children, especially at early ages, may not be able to receive the proper attention. However, there are resources available to parents and teachers to help challenge and nurture the higher-order thinking of gifted kindergarten students.
Reason and critical thinking seem to be integrally linked. Scholar B.K. Beyer defined critical thinking as the ability to make reasoned judgments. Critical thinking is both rigorous -- not yielding to emotional arguments -- and is focused on the validity of information. With these skills, people can make decisions, come up with new ideas and avoid being fooled by others.
Critical thinking is a process of testing an argument or observation for validity. By breaking a concept down into a series of premises and conclusions, you examine the causal relationship between elements of the observable world and aspects of reality you may not yet have considered. Thinking critically and examining beliefs is a basic survival skill. Without the ability to observe, question, learn and draw sensible conclusions about the world around them, the ancestors of modern humans may never have survived.
Strategic thinking is the practice of developing and analyzing every decision after taking into consideration present and future conditions, the desired outcome and the expected results. Strategic thinking involves solving problems and challenges that arise by assessing them in a broader context. Critical skills are required in formulating an idea of what you aim to achieve in the future and working towards it.
Developing critical thinking skills begins with your assessment of your decision-making process. Critical thinking is the process of evaluating your decisions and ideals, making informed decisions and learning from your mistakes. You can develop these skills by evaluating your thought process, determining the strength of your current critical thinking skills and working to improve those abilities with practice and studied techniques. Critical thinking starts the moment you admit you do not have the answer to a question and then begin working to find the answer.
Critical thinking is characterized by an ability to accurately and consistently synthesize information, according to the educational nonprofit Foundation for Critical Thinking. Teachers can introduce critical thinking principles in the classroom by designing assignments that challenge students to apply abstract ideas to concrete guidelines. Requiring students to explain methodology or evaluate a study using the scientific method can also cultivate critical thinking. Establish a classroom environment that encourages students to articulate ideas and questions in response to new material.
Critical thinking involves analyzing or evaluating information using depth and logic, considering other points of view, identifying bias and recognizing assumptions. An individual gathers all needed information and makes a decision or assumption using all or some of the factors of critical thinking. Colleges and universities encourage critical thinking during higher level learning. Six styles or strategies of critical thinking include contextualizing, annotating, questioning, considering the full argument, reflecting and synthesizing.
Critical thinking requires the ability to look beyond stated positions and ideology to carefully consider how assumptions and understandings are formed. As a teacher, it is your responsibility to foster this ability in your students. Students must be taught to look at things critically and evaluate parts of a discussion or life situation. Teaching them to recognize the strengths and weaknesses of arguments will help them make better choices for themselves both in and out of the classroom.
An interactive whiteboard is an excellent educational aid that can be used to demonstrate concepts ranging from simple to complex, as well as a tool that can help teach and enforce critical thinking skills in students. Critical thinking is the act of evaluating a concept using reasoning and rationality founded in research and facts, according to Critical Reading.com. This skill is something that will be an asset to students not only in the school environment, but also in the work place and their future careers.
You rely on your critical and creative thinking skills so often, you might not even be aware of using them, yet they are vital to growth and success at work and at home. Whenever you’re engaged in effective problem-solving, you’re using your creativity to generate possible solutions and your critical thinking to evaluate their usefulness.
Critical listening allows you to gather all of the information being presented to you in an argument so that you can then assess the argument in a concise, focused and logical way. But when listening critically, you are only evaluating and judging the context of the argument, not the person you're listening to. Critical listening differs from everyday listening and thinking in a number of ways.
Critical thinking is the ability to rationalize thoughts, ideas and issues by using logic to determine the best response. Dominated by a thought process rooted in analysis, critical thinking involves questioning ideas and only moving past issues once a logical answer has been derived. Because of the inquisitive nature associated with critical thinking, random thoughts can help strengthen critical thinking skills when used as a way to develop a train of thought.
Chess is a game that can greatly improve a child's critical thinking skills. There are many ways to introduce a child to the game of chess. Parents can help a child learn at home, and there also are programs that bring chess into the classroom. Academic Chess, for instance, offers free in-class lessons and also an afternoon chess program for students who are interested in learning the game.
Any time you read literary materials or experience something that requires you to comprehend it, you employ a variety of thinking skills. Thinking skills relate to the way in which you process and understand information, and you employ specific thinking skills based on what you wish to gain from your thoughts. Analytical and critical thinking are two styles of thinking skills that are commonly used, but employed for different purposes.
Critical thinking is a guided, disciplined and systematic process, carried out by an individual to achieve reasoning and thought capacity at the highest level and to apply this knowledge in everyday life. Social work refers to activities aimed at helping members of society attain their potential and enjoy a fulfilling life. Social workers should try to include critical thinking in their jobs in order to have more success.
Critical thinking is the ability to identify flaws in an argument and solve problems through reasoning. When teaching children about critical thinking, you can prepare a variety of entertaining activities for them to do. Activities let the children have fun, while learning how to solve problems they may encounter in school and life.
Evaluation --- whether of an argument, a process or an individual's job performance --- is the product of judgment, interpretation and critical reasoning. An informed judgment is one that is objective and impartial. If you have the responsibility of conducting evaluations, use logic and empirical evidence to filter out bias, account for circumstances and correct fallacies.
Critical thinking is an approach to thinking in which a person visualizes an idea and then takes the steps necessary to reach a conclusion. It involves research, investigation, evaluation, conjecture and implementation. Critical thinking ability is vital to many professions in today's information society. Utilizing the five-step process of critical thinking can eliminate much of the worry and anxiety of problem solving.
Critical thinking functions as a skill that you develop -- a way for you to look beyond the obvious and discover a deeper and more important meaning. Accepting things that you encounter at face value does not utilize your critical thinking skills. Instead, to improve critical thinking, invest the time and examine the information or situation from different angles whenever you hear or see something about which you have little or no knowledge.
Critical thinking skills help students succeed in college. The ability to think critically translates into good marks and, more importantly, into intellectual growth. Critical thinkers do not see the world in black and white, answer questions with a simple yes or no or accept things at face value. Thinking critically means questioning, not criticizing.
Dr. Richard Paul, an internationally renowned expert on critical thinking, has argued that critical thinking prevents ethical instruction from turning into indoctrination. People shouldn't be taught ethical conclusions, he argues, because teachers will often bring their own ethical prejudices to bear and will indoctrinate the students, rather than teaching them to understand ethics and think for themselves. Instead, people need to learn to think independently.
Critical thinking is a valuable skill for people of all ages. Sharp critical thinking skills can help children develop the reasoning and logic necessary to solve difficult problems and consider different perspectives. Assist your child in uncovering some of his intellectual possibilities by helping him think in a critical manner.
Critical thinking involves more than simply memorizing information. Thinkers must seek information and assess which ideas are valuable. Certain facts alone may aid in some forms of decision-making. For example, people know to fill their tires up with air when they seem low. However, people sometimes need to make decisions on problems that have no obvious answers. In these cases, critical thinking skills come in handy. Critical thinking is a process that uses logic to evaluate premises and evidence as objectively as possible.
Sociology is the study of the behavior, habits, interaction, and lives of groups of human beings and their societal structures. It takes critical thinking skills to study and understand people, and document the lives of organized groups. Using the four elements of information, questions, assumptions, and point of view, sociologists study humankind and their cultures.
Higher order thinking is a complex level of learning and comprehension of theories and facts that can be applied academically, personally and professionally. The ability to think critically and analytically about a problem requires higher order thinking skills. Benjamin Bloom's 1956 Taxonomy of Educational Objectives identified six levels of cognition, with knowledge being lowest and analysis, synthesis, and evaluation being highest. K-12 teachers create lesson plans and assignments to help students develop the critical cognitive skills that are required for success in academic and business life.
Defining critical thinking and classifying "levels" of critical thinking is a curious endeavor. Critical thinking in its purest sense grapples with the preoccupations of how we use our mind to approach the world around us. It involves such things as comprehension, evaluation, judgment, creativity, decision making, and problem solving. Critical thinking is meant to evolve and relies on logic and reason. Yet, a few conversations with different people will make it apparent that critical thinking is not the same for everyone and sometimes, the evolutionary process has abruptly halted -- leading some critical analysts to examine and conclude different levels…
Edward Glaser, an educational and psychological theorist, defined critical thinking in 1941. His definition consisted of three components. First, he argued, critical thinking involves a willingness to thoughtfully consider problems rather than jump to hasty conclusions. Second, critical thinking involves logic and mathematical reasoning. Finally, critical thinking involves skill that can increase over time. In addition, the process of critical thinking requires certain psychological dispositions that you can practice.
Abstract thinking is a broad, general way to think about and process an object or idea. Abstract thinking requires you to think on a grander scale far beyond the object itself. For example, if you think about a candy bar in an abstract way you might think about the person who invented that candy bar, what his life was like and what inspired him to make candy as opposed to thinking about the things such as particular ingredients or calorie count. Abstract thinking increases in a person's late teens and thus there are a few different ways you can teach…
The term "critical thinking" describes the application of learned material and sound logic to new material, while remaining sensitive to the context of the example. Young children use critical thinking while playing "eye-spy" while high-school students write term papers integrating multiple viewpoints and sources of information. Incorporating critical thinking activities into a homeschool curriculum isn't difficult, but it does require creativity and an awareness of everyday learning opportunities.
"Heuristic" comes from a Greek word meaning to find or discover. Heuristic learning is experiential learning, wherein students learn by experience or through discovery. Experiential learning includes trial and error, educated guessing and using a "rule of thumb" or an established rule to find the answer to a more complicated problem. Teachers use heuristic learning to encourage students to use common sense and rational methods to find answers to academic questions. However, it is also the teacher's responsibility to create a learning environment conducive to experiential learning with varied and age-appropriate educational lessons.
Faced with choices, people make decisions in a number of ways. Often they engage in critical thinking, carefully considering the options. Emotions, however, almost always play a part in this process. By understanding how the two types of thinking are perhaps irrevocably entangled, people can more effectively make decisions.
Critical thinking is a purposeful, structured and disciplined mode of seeking out and processing information. It is important in research applications because it allows a researcher to identify, acquire and analyze the information necessary to resolve a research question. A lack of critical thinking skills leaves a scholar with a mountain of indecipherable information.
The spongy matter in our skulls -- the brain -- is a fascinating, complex and still mysterious organ. It is the source of our consciousness and the means by which we perceive sensory data, form memories and use logic and reasoning to figure out problems. Long-term memory and critical thinking are two important neurological functions.
Critical thinking is defined broadly as the ability to gather, evaluate and use information effectively. Thus, it is one of the most important skills to teach students from a very young age because of how useful and necessary it is throughout a student's educational career. By teaching critical thinking skills to children in the second grade, elementary school teachers help ensure that young students build on and continue to use these skills throughout their elementary, high school and college educations.
Verbal communication skills top the list of "soft" job skills sought by employers, according to the National Association of Colleges and Employers Job Outlook 2011 survey. Strong communication skills are important, and there is always room for improvement. By learning what causes poor communication, you can be more aware of how to communicate successfully with others.
You can do several different activities in the classroom to build vocabulary and critical thinking. Most teachers find that implementing the lesson is the easy part, but it is creating the lesson that presents the most challenges. Building vocabulary doesn't have to be achieved only through reading or vocabulary quizzes. You can create a lesson around different vocabulary games and activities, such as crossword puzzles and word walls. Integrating critical thinking skills will keep your students stimulated and provide them with the foundation to continue learning and achieving.
Critical thinking is the process of gathering information, evaluating it for truth and making a determination that is based on the information you took in. It is the cognitive search for truth in the events that surrounds your life. Prejudices can prevent you from entering the critical thinking process or prevent you from reaching a fair conclusion. While biases are natural, you are responsible for seeing that they do not affect your critical thinking.
If you're teaching any theory, you'll need to strike a careful balance between abstract and concrete methods. Some students learn best with concrete, real-world examples. These people understand ideas and theories best when they can relate them to objects and situations they recognize. On the other end of the spectrum, some students learn best by analyzing raw data or thinking about problems more conceptually. If you are planning on teaching a class, you may find it helpful to learn about abstract and concrete teaching methods.
The education field uses critical thinking skills to help develop thoughtful minds. To help reach this goal, teachers can use visual arts to expand mental skills students need for academic, personal and social development. Teaching critical thinking in education enables students to use logic and reasoning to solve problems. It is a purposeful, cognitive process. When students engage in critical thinking, they see connections among topics, concepts and disciplines.
Critical thinking is the process of interpreting, understanding and evaluating. You use critical thinking when you encounter any event with a purpose or meaning that's not explicit, such as a phrase, occurrence or image. The purpose of critical thinking is for you to evaluate data and understand why it occurred and what it meant to you and to others. Various personal challenges interfere with your ability to think critically.
When the children at your school need additional assistance with their math skills, it can be difficult to find time during the academic day to add more instruction without taking away from other classes. While learning math is important, so are the other classes on their schedules. Adding remedial math classes before or after school, or on the weekends, is possible, but it can be expensive.
Critical thinking skills are an important tool, especially when it comes to personal beliefs and academics. When applied, critical thinking is a powerful defense against ideas and opinions that are potentially harmful or blatantly wrong. Unfortunately, not everyone possesses this ability, although it can be taught. Understanding what suppresses critical thinking is an important step to obtaining a more open mind.
Critical thinking involves the use of experience, reasoning, common sense and intuition to make informed decisions. Good critical thinking skills include curiosity, thinking through and analyzing issues, exploring the Internet and other media for more information, examining and incorporating new ideas and assessing what has been read and heard. Bloom's Taxonomy, published in 1956, describes six stages of critical thinking: Knowledge, Comprehension, Application, Analysis, Synthesis and Evaluation.
When making decisions, emotions will often overtake reason and dictate your responses. Increasing your ability to make rational, reasoned decisions does not involve suppressing emotion, but acknowledging its presence and understanding its impact. Altering the way the mind works through problems and how it reaches decisions is a huge and highly personal undertaking, but there are a number of ways in which you can start refining your rational skills.
Boardmaker provides the ability to include visual cues with words to help students with disabilities who are unable to read, communicate and participate in other activities. Using Boardmaker, an educator has the power to make a large variety of supports to assist students who would otherwise be unable to participate in activities. Doing so allows them to be more independent.
The first few weeks of kindergarten may be challenging as teachers evaluate the skills of individual students. In the average classroom, many students may have basic computer skills while others may have never used a computer at all. Because even in kindergarten computers enhance the classroom learning environment when used properly, it is wise for teachers to expose every student to the available technology, and one way to do so is with games.
Critical thinking is the process of taking in information and then conceptualizing or analyzing it in a critical way. Developed critical thinking skills are important for all teaching positions, including mathematics. Since math is almost entirely based on the use of concepts, critical thinking is necessary both to fully understand and to successfully explain most mathematical operations.
Employing critical thinking in your communication with others is a process that can promise superior problem-solving skills. It can also be a way to create a relationship in which healthy skepticism is welcomed. Problem solving comes from seeing an issue from more than one angle. True collaboration occurs when partners exchange ideas and questions freely. Without employing critical thinking, possible solutions can be tainted by individual bias.
Literature programs help students in primary schools to develop literacy. Some elementary school teachers have taken to moving beyond basal reading methods to providing more in-depth literature programs to their students. Among the benefits of these programs are encouraging recreational reading, development of problem-solving skills, increased cultural understanding and development of independent learning.
A person with good intuitive skills is able to provide good insights and quick answers in many life circumstances without having to spend much time reflecting on or debating the right course of action. Because these talents seem to come naturally to some people, it can seem as if good intuition is something inborn. That might be partly true, but that doesn't mean you can't train up your intuitive skills with a little determination and practice.
As the Internet has become an increasingly important tool in the workplace, online work has also been appearing in schools, which are dedicated to preparing kids for their future careers. Projects involving the use of the Internet are commonly assigned to students. Such online schoolwork requires kids to use special thinking skills.
Mentoring new teachers is a rewarding and invigorating experience. Mentors generally provide support and guidance to new teachers as well as information and/or suggestions to help a specific student or to help teach content. Not only can mentors stave off the new teacher's frustrations, they also can be the catalyst for a rewarding first year of teaching and a successful career. While new teachers gain from mentoring programs, so do veteran teachers. Veteran teachers keep up-to-date on new teaching strategies and renew their own enthusiasm for the job.
Benjamin Bloom and a team of psychologists came up with what has become known as "Bloom's Taxonomy" of thinking skills -- knowledge, comprehension, application, analysis, synthesis and evaluation -- in 1956. Richard C. Overbaugh and Lyn Schultz of Old Dominion University report that the terms have been changed from nouns to verbs and updated for the 21st century: remembering, understanding, applying, analyzing, evaluating and creating. The top two levels have also been interchanged. Lesson assignments aimed at the remembering, understanding and applying levels require lower level thinking skills than those in the analyzing, evaluating and creating categories. Incorporating critical thinking…
Critical thinking skills assist nurses with assessing and interpreting information and observations and making decisions based on reading, observation and clinical practice. Nursing students benefit from critical thinking exercises for developing diagnostic and patient care skills. Critical thinking also enhances communication skills. Critical thinking exercises for nursing courses can be implemented in lecture, lab and clinical settings. Incorporating critical thinking in assignments and in class develops nursing students' analytical skills used in day-to-day patient care.
Free will and determinism are the two fundamentally opposite answers to the question whether man can make choices independent of any external influences. Understanding the difference between these two philosophical positions requires knowledge of what causal chains are, as well as what constitutes uninfluenced personal decisions. This debate is based on the highly important philosophical problem of the existence of freedom of choice or predetermination of all events.
Critical thinking is an active, balanced approach to research and analysis. Rather than holding tight to a belief just because it's what they've always believed, critical thinkers evaluate all sides and allow new ideas to influence them when appropriate. The ability to think critically is essential, especially with regard to learning.
Thinking skills are a vital component for success at school. Often, logical thinking is described as a skill used by older children. Yet thinking skills can be taught as early as kindergarten, and doing so builds a solid foundation for thinking in school. Using prepared questions can make the activity fun and engaging while teaching children to examine facts and learn how to think instead of what to think.
First-graders' verbal skills rapidly expand to include increasingly complex sentence structure. Teachers use this growth, in conjunction with students' emerging reading skills, as a building block to introduce writing. They capitalize on their students' natural curiosity about the world to show them how writing is used to communicate facts, ideas and fantasies and to process personal growth and development.
Critical thinking is an intellectually disciplined process of analyzing and synthesizing information as a guide to formulating beliefs and actions. Every day you apply information that you have tested and found to be valid. The two primary components of critical thinking are a set of skills to process and generate information and the rigorous habit of applying those skills to your life and work.
According to the Foundation for Critical Thinking (see Reference 1), critical thinking involves raising vital questions and problems, gathering and assessing relevant information and formulating well-reasoned conclusions. Further, "critical thinkers routinely apply the intellectual standards to the elements of reasoning in order to develop intellectual traits." When choosing topics for critical thinking essays or theses, choose well-researched topics from credible references.
Science activities that require inductive thinking approach learning in an exploratory manner, asking students to generate conclusions based on observations. Students analyze information, identify patterns, make generalizations and explore hypotheses to draw conclusions. Activities that provide this exploratory opportunity require students to evaluate the generalizations they have made through comparisons and discussions of concepts related to the activity. Research shows this type of thinking assists in deeper processing of information, leading to long-term memory retention.
Critical thinking is the process of analyzing and evaluating collected information through reasoned judgment to form a determination or idea. The information gathered to form this determination is developed from such things as evidence, reason, judgment and fairness. Critical thinking is also important when making decisions, settling disputes or weighing the differences among available options.
Your high school years are supposed to be the best years of your life. While that may be arguable, they certainly contribute significantly to your life's direction. Most secondary school programs finish at grade 12, but a 13th grade offers students additional support that serves them well when they pursue post-secondary education.
Thinking skill is the ability to think about the environment and situations in an intelligent manner. The more complex situations your mind can comprehend, the higher your thinking skill level is. Thinking skills are typically built throughout your life, beginning in primary school.
Critical reading is the search for underlying meaning in a piece of text. After pre-reading, you re-read the text to identify elements like language usage, assumptions and information. To read something critically, you are not only looking at the facts presented in the text, but also gathering an interpretation of what the text means. Critical reading skills enable you to understand and identify bias and tone. By following a few steps, you can develop critical reading skills.
The late 19th century American philosopher Charles Peirce developed a sophisticated model for critical thinking. Peirce was the founder of the tradition of American philosophy called Pragmatism. According to Pragmatism, all thought is contextual. People's thoughts and beliefs help them to make sense of the world. When the context changes or your beliefs become problematic, you are compelled to "fix your beliefs." This is done through opening the road to inquiry. The barriers to critical thinking, in Peirce's terms, are anything that blocks the road to inquiry.
According to the National Council for Excellence in Critical Thinking, "Critical thinking is the intellectually disciplined process of actively and skillfully conceptualizing, applying, analyzing, synthesizing, and/or evaluating information gathered from, or generated by, observation, experience, reflection, reasoning, or communication, as a guide to belief and action." Schools on every level of the academic spectrum utilize a variety of tools to assist in the development of critical thinking skills. Information Literacy training integrated into the core curriculum and taught by a school's Librarian or Media Specialist is one of the most effective tools that a school can utilize to assist in…
Assessments evaluate a situation, product or service and provide valuable feedback so you can make improvements. They also help those conducting the assessment to gather better insight into how something works or how people perceive something, often using a variety of methods to get the best results. The education and consumer fields are two of the most popular places to find the assessment process.
PE classes are used for more than advocating good physical health. Use critical thinking games to help improve the minds of your PE students. The benefit of using critical thinking games in PE is that they often promote better physical health while simultaneously helping students gain a better understanding of problem solving.
It is important for any business leader to set goals for where he wants his business to be in the future. Strategic thinking will help you meet these goals through a process of developing your skills in creative problem solving and teamwork, as well as your critical thinking skills. To be a strategic thinker, you must be able to see the end result of your vision, and work backward from that point to where you are at the present; then build the correct road map to move forward.
Judgment is needed for any job. The ability to make a sound decision based on the facts and implement a plan can make the difference between failure and success. Assessing the strength of your judgment skills and those of others can help you learn to improve your skills.
Higher-level thinking skills in fifth-grade math include expressing mixed numbers with fractions or decimals; adding, subtracting or dividing two- and three-digit numbers; and creating equations and inequalities from available information. Fifth-grade students learn to interpret or create graphs to indirectly predict information. Your state department of education publishes performance standards for fifth-grade mathematics, and the Educational Testing Service certifies teachers' proficiency and publishes sample elementary mathematics questions.
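As a brief worked illustration of the first two of those skills (the numbers here are invented for illustration, not drawn from any particular standard), a mixed number can be rewritten as a fraction and a decimal, and a division can be checked by multiplying back:

\[ 2\tfrac{3}{4} = \frac{2 \times 4 + 3}{4} = \frac{11}{4} = 2.75, \qquad 312 \div 4 = 78 \text{ since } 4 \times 78 = 312. \]

An inequality built from available information, such as "each bus seats 24 students, and at least 100 students must ride," would be written as 24b ≥ 100, so b ≥ 100/24 ≈ 4.17, which rounds up to 5 buses.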
Critical thinking is one of the most important skills in language arts. Without critical thinking skills, students cannot analyze written texts. Without the ability to analyze written texts, students cannot critique works of literature. Because language arts is so heavily based on analysis, it is crucial that language arts students actively practice their critical thinking skills.
Critical thinking is the ability to apply criteria and to judge. B.K. Beyer argues that critical thinking is essential to democracy because a successful democracy assumes a citizenry that is capable of assessing arguments and reaching reasoned decisions upon which to vote. Effective critical thinkers are fair-minded, logical and able to consistently apply criteria to what they are judging. The various techniques for improving critical thinking all require practice.
The upper elementary grades are a time when students move from learning how to read to reading in order to understand a concept, topic or idea. Students advance from being spoon-fed information to using critical-thinking skills. Moving from simple recall to processing information through observation and experience is central to critical thinking. These skills can easily be encouraged in the fourth-grade classroom.
Primary sources are visual, written or recorded resources created by an observer or participant of an event or period of history. Nonfictional primary sources may be considered eyewitness accounts of history, while fictional primary sources may be considered artistic reflections or interpretations of history. According to the Library of Congress, "examining primary sources gives students a powerful sense of history" and may increase student motivation in a class. World literature teachers can use primary sources to engage students in discussions related to the real world about which the literature was written. Further, many primary sources are available through the Internet,…
Critical thinking is an active interpretation of information and situations in order to come to a thoughtful and intelligent conclusion based on reason and evidence. Academics use critical thinking skills to evaluate their research in order to formulate an original thesis that furthers understanding of their subject matter. In short, without critical thinking skills, it would be very difficult to be an academic.
Improving your critical reading skills is one of the most important things you can do to further your academic success. You may love reading for pleasure. Though this shares some obvious common ground with academic reading, there are clear differences, and you need to understand them as a preliminary step to improving your critical skills. When you read for pleasure, you generally relax. You may get a strong impression of the book and form your own viewpoint. However, academic reading requires you to pay more attention to detail and to remain more objective. Academic,…
Deborah Knott, the director of the Writing Centre at the University of Toronto, indicates that the best technique to use in the study of critical writing is critical reading. Being a critical reader is a necessary skill, since the majority of writing done will be related to the analysis of other writing. In order to use other sources in a critical argument, the aspiring writer must become a proficient critical reader.
Public schools often debate whether or not critical thinking skills should be taught and to what extent these skills should be taught. Homeschooling parents, on the other hand, get to decide to what extent students should practice critical thinking skills. Critical thinking skills are those that teach students how to learn so they can be autonomous learners.
Critical thinking and writing are skills that can be applied to help clarify and understand difficult and complex issues. The four main steps in critical thinking fit together like a puzzle, each helping to articulate the purpose of the others. The benefit of critical thinking and writing is that numerous aspects of an issue can be considered before a logical conclusion is reached. This allows for minimal points of vulnerability when the conclusion is presented.
Humans interpret data in many ways. We use images, color, structure, sounds, smells, tastes, touch, spatial awareness, emotion and language to learn and remember facts, figures, faces, places -- in short, everything we know. Mark Wheeler, research scientist at the University of Pittsburgh, says of memorization, "I don't think there are any tricks, any way to really remember something without putting effort into it." But, he says, "There are effective strategies once you are willing to put in the effort."
Some students and teachers may be satisfied to teach and learn at a level that is just enough to get by. Training students to think critically is a slow and laborious process. Students often have difficulty accepting teaching to a higher cognitive level because they may be accustomed to passive learning and do not want to exert the intellectual effort to stretch themselves mentally. Despite the difficulties, it is advantageous to promote critical and creative thinking, and it is becoming more prevalent in classrooms.
Preschool children are full of wonder each day. Their ability to think and even make age-appropriate decisions can be enhanced with the guidance of parents and teachers. Observation at this age is full of discovery and all it takes is someone who is willing to rediscover the things in life that are often taken for granted as we age. Pointing out a rainbow, a bug crawling on the ground or a blade of grass and talking about it is a great source of thinking enhancement.
Critical thinking is the process of using data, logic and beliefs to guide behavior. Reasoning must be objective and free from emotion and biases whenever possible. Logical thought involves rationally approaching problems and questions to solve them in a fair manner.
Students who have developed critical-thinking skills will succeed when asked critical-thinking questions, while students who do not understand critical thinking will fail. Therefore, teachers must not only ask their students to think critically, but must also guide them through effective critical-thinking strategies. To do this, teachers can use a variety of games and assignments that encourage critical thinking.
To think critically is to question the assumptions underlying arguments and propositions. It is a valuable skill to learn and has numerous real-world applications. The ability to think critically is a desirable trait for many, including prospective lawyers, engineers and accountants. The teaching of critical thinking requires the establishment of clear objectives. These objectives provide students with benchmarks to evaluate their progress. Important objectives in the teaching of critical thinking include mastering logical fallacies, such as fallacies of relevance, of ambiguity and of presumption, as well as acquiring the ability to consider issues from multiple perspectives.
Deciding between an MBA or MS in a specialized area of business involves considering your background and ambitions, as well as the time and financial commitment involved for either option. Although their approaches to instruction differ, either type of degree or a combination of both can help pave a path for the same career in the long run.
In order to teach students to read critically, you must encourage them to think in that manner. It requires the retraining of a mind that has, most likely, in the years before they took your class, regurgitated facts instead of thinking analytically. As the York University website suggests, one way of breaking this mindset is to encourage students to use a pencil, as opposed to a highlighter, while reading the material. Teach your students that highlighting parts of the text is well-suited for the purpose of rote memorization of facts, while using a pencil to actually comment on the material…
Critical thinking means the complex processes involved in analyzing, synthesizing and evaluating information. This is as true for elementary children as it is for middle school, high school and college students, and for adults. Critical thinking activities for elementary children can range from asking the right questions during discussions, to having children evaluate their own progress, and everything in between.
Although no single definition of critical thinking exists, most experts would agree that it involves the process of constructing sound judgments based on logic, according to the University of Minnesota. Frequently, educators use Bloom's Taxonomy to classify levels of critical thinking. Mary Forehand of the University of Georgia explains that the Revised Bloom's Taxonomy organizes a hierarchy of thinking skills. Metacognitive skill -- thinking about thinking -- can expand the concept of critical thinking.
How well a student learns depends not only on his inherent interest in the subject, but also on how motivated he is to learn. Most students will agree that a teacher plays a major role in shaping the learning experience. Considering this, it is important for teachers to develop skills that motivate students to learn and perform better. There is no single approach a teacher can use; she will need to try different methods to find ones that suit particular students.
Professionals use critical thinking in many subjects, such as social sciences, philosophy, law and business. Many professors teaching these subjects use case studies to develop students' critical thinking skills. To write a case overview for a critical thinking project, you must acquire all the relevant case information and identify the key problems in the case. You must also identify how you will use critical thinking skills to solve the key problems presented in the case study.
Critical thinking is a skill that is not necessarily dependent on intelligence or education, but it can be used and applied by anyone with an open and objective frame of mind. However, there are some barriers of a personal and cultural nature that prevent people from using their critical thinking skills to their full extent.
If you are a teacher, you may find that you spend a great deal of your time teaching children the limited set of skills they need to pass the next exam, rather than actually teaching them to think for themselves. While it may seem that there isn't enough time in the schedule to teach such abstract concepts, it is fairly easy to work interactive thinking into the learning process.
Assessment in the science classroom is important to measure student growth and understanding. Science is a very involved area of study. For this reason, a variety of assessment tools are often utilized to assess the student's progress. Educators may choose to use criterion-referenced tests, lab practicals, notebooks or experiment and lab write-ups to assess student understanding of scientific content and processes.
According to a statement made at the eighth annual International Conference on Critical Thinking and Education Reform in 1987, critical thinking is defined as the "intellectually disciplined process of actively and skillfully conceptualizing, applying, analyzing, synthesizing, and/or evaluating information gathered from ... observation, experience, reflection, reasoning, or communication, as a guide to belief and action." In simpler terms, critical thinking is taking the knowledge you've learned and applying it to the situation at hand. College professors often give assignments in an effort to develop these critical thinking skills in students.
Assessment of critical thinking skills determines what students learn and how they learn. It also helps educators become better teachers. Measuring these skills tells teachers how students judge and analyze situations and how they make decisions. Applying several assessment techniques gives teachers a broad sense of the skills learned during the critical thinking process.
Critical thinking skills demand that students move beyond the literal level. On Bloom's Taxonomy, which classifies the hierarchy of learning, critical thinking encompasses the higher-order skills of analysis, synthesis and evaluation. All of these levels ask students to do more than remember information. In a diverse population, developing critical thinking skills can present a challenge for teachers; however, there are several methods to encourage this skill despite differences among the students.
Critical thinking is an essential skill for any serious economics student. Economic data cannot be analyzed without some understanding of logic and human behavior, so critical thinking is an essential part of understanding the principles of economics. The principles of economics fall into three broad categories: how people make decisions, how people interact and how the economy as a whole behaves. The critical thinking skills that apply to economics, therefore, are those that relate to human and organizational behaviors.
Secondary school in the United States is generally defined as high school. High schools have a number of goals that they need to focus on from the district, school and classroom levels in order to ensure that students are getting a quality education. In the end, all other goals are supposed to support the primary goal, which is to educate students.
Many mathematics games encourage critical thinking. Mathematics provides a structure to accurately measure things. Measuring things gives you a sound basis for analysis, whether you are making a decision about a game strategy or assessing a question about social policy. Analysis and assessment are the essence of critical thinking. Critical thinking depends on mathematics because inaccurate first impressions are often corrected by simple mathematical formulas.
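One classic illustration of that last point (a standard example, not taken from the source): a bat and a ball cost $1.10 together, and the bat costs $1.00 more than the ball. The intuitive first impression is that the ball costs 10 cents, but a one-line formula corrects it:

\[ x + (x + 1.00) = 1.10 \;\Rightarrow\; 2x = 0.10 \;\Rightarrow\; x = 0.05, \]

so the ball actually costs 5 cents and the bat $1.05.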
Tangible writing tools like pens, paper, computers or other electronic devices are certainly instrumental, literally, in the process of critical writing. But a number of intangible writing tools fit the broader definition and are just as important. Without these abstract tools, the critical writing process will likely be mediocre at best. The critical part of critical writing refers to breaking things down, figuratively, into component parts for close examination and analysis.
Pearson College is a rather unconventional educational institution. It is a two-year, pre-university college located on the Canadian coast, and its student body is composed of 200 students from all over the world who are pursuing an International Baccalaureate certificate. Among the many skills that students can develop are critical thinking skills, which have immense value both within and beyond the academic realm. Critical thinking encourages analytical processing of information, outside-the-box cognition, and logical reasoning. While there is no course at the college explicitly dedicated to "critical thinking skills," different courses can help develop these skills.
Critical thinking means several things, including focused thinking in a specific area, actually thinking about thinking to improve thinking skills and intellectual skill mastery on a specific topic. Nursing students need to develop critical thinking skills so they can gather information, concentrate, remember, organize, analyze, integrate and summarize. As the difficulty of cases increases, the process of critical thinking intensifies through the knowledge base and practical experience of the nurse. Nurses need to fully engage their mind as they treat patients and make decisions related to safe, healthy client care.
Critical thinking is a skill imperative to higher learning. The ability to take in information and assess it is one critical thinking skill. Being able to integrate more than one idea into a cohesive whole is another. Critical thinking is important for problem solving and makes it possible for a learner to use information constructively.
People who acquire critical thinking skills generally want to improve their thought processes in some form or fashion. Critical thinking is often regarded as a pathway of discovery toward greater self-awareness, for example. You can develop critical thinking skills by first examining how you already interpret the world.
Educators are always looking for ways to promote critical thinking in their pupils. However, critical thinking isn't just for students. Parents, doctors, engineers, teachers -- everyone can benefit from a flexible line of thinking. Critical thinking involves taking previously known facts and expanding, connecting and challenging them. Even if you are not in a classroom, you can practice these skills practically anywhere.
Critical thinking is a vital skill as students enter the middle grades. To ensure students retain what they learn, teachers must avoid what CriticalThinking.org calls “mother robin teaching”: processing the content for the students and feeding it to them. To prevent students’ perceiving school knowledge as “something independent of thought,” they must be given the tools and made to reason things out. A number of websites provide resources for teaching them these skills.
People use critical thinking skills in everyday life to solve problems, make choices and gain perspective on events. They learn these skills in formal education and in the normal course of life. The key ideas for critical thinking skills involve the capacity to consider and evaluate all angles of an issue -- positive and negative -- before deciding on an answer or a resolution.
Solving business problems requires critical -- and creative -- thinking. Drawing on Bloom's Taxonomy, the higher-level thinking skills of analyzing, synthesizing and evaluating are essential for any situation that requires more than simple recall and application. The challenge of writing a case for using critical thinking skills to solve a business problem lies in identifying the problem and suggesting a course of action to resolve the issue.
A master's degree in business administration (MBA) is designed to provide you with the knowledge, skills and expertise necessary to become an effective business leader with a specialization in an area of your choice. Earning an MBA may increase your salary, boost your career and strengthen your leadership skills. Students in the MBA program have varying educational backgrounds and work experience.
A key skill that can help children in their academic, social and emotional lives is the ability to think outside the box. You can help encourage this skill from an early age through a variety of activities and games. If you routinely read with your children, you can use this reading time to create activities that nurture and encourage their critical thinking abilities.
Critical thinking skills can help improve your performance in essentially every area of life, from your personal life to your relationships to your job, your hobbies and other activities. This is because critical thinking underpins other skills by encouraging you to think about your decisions, opinions and general outlook. Critical thinking results in better decisions, which in turn make you more competent in every area.
According to educator and educational psychologist Linda Elder, critical thinking skills develop early and continue to develop over the course of a person's lifetime. Though the ability to think critically manifests itself differently in primary grade students, several basic assumptions can help to assess those abilities as they exist in young children.
In today's classroom, it's important to teach students to become critical thinkers. Engaging in higher order thinking exercises enables students to go beyond the simple memorization and regurgitation of facts. Developing these thinking skills helps students understand, infer, evaluate and apply information to solve problems both in and out of the classroom. Teachers can encourage the development of these thinking skills by incorporating thinking activities in the classroom.
Teach young elementary students to use critical thinking skills by asking purposeful questions. Critical thinking involves analyzing, comparing and synthesizing information. Develop activities that will foster the growth of this kind of thinking and allow students to go beyond simple recall questions. When we teach children how to think, we give them the most essential tool needed for a lifetime of learning.
In 2005 Bruce T. Lahn of the University of Chicago released a study revealing that certain genes present in the brain had multiplied, possibly aiding brain function. A human being's ability to evaluate the world around her in a critical light is inextricably linked to other cognitive processes. Science's understanding of the human brain has only scratched the surface of critical-thinking possibilities. A key to how the human mind responds to its immediate surroundings comes from how the brain has evolved over time.
Graduates of a Master of Business Administration degree are ready for employment as middle-level managers in any organization. However, many MBA programs require students to specialize in the second year of the program. The area of specialization does limit the career choices of a graduate, but it also opens up avenues of employment unavailable to graduates with other specializations. Four specializations in which the Bureau of Labor Statistics predicts job growth are public administration, finance, healthcare management and project management.
Thinking activities are important parts of school for children of any age. In kindergarten, group activities can help students to get to know each other and feel like they are part of the classroom. Activities that get the entire class to think together help children to learn the benefits of teamwork and creativity.
Critical thinking covers many aspects of thought including planning, reasoning, logic and reflection. Physical education class, or P.E., can incorporate all the aspects of critical thinking in many activities. It is the job of the teacher to make students think about the lessons they are being taught to utilize critical thinking skills.
"Reflections on Classroom Thinking Strategies," written by Eric Frangenheim, is designed by teachers, for teachers, according to Teacher Training Resource Bank Online. The reflections outlined within the book have been not only birthed within the classroom, but tested there, as well. The ideas illustrated within the book are intended to ignite a love for learning for both students and teachers, using one basic belief and three main strategies.
Thinking is natural. Critical thinking, however, is a more active mode of thinking where the thinker consciously separates facts from opinions and challenges assumptions. Developing good critical-thinking skills requires self-discipline and frequent practice inside and outside the classroom. Children, students and adults all should develop -- and use -- critical thinking skills in their daily lives. Using these skills makes an individual apt to make informed, reasoned decisions instead of emotionally-driven ones. Critical thinkers actively seek and evaluate information; they do not passively receive it. Using critical thinking skills improves your thinking and ability to communicate with others.
Nurses are called upon to make life-and-death decisions, and to do that, critical thinking has to be second nature. Bloom's Taxonomy identifies lower and higher level thinking skills. While the lower levels of knowledge, comprehension and application are foundational, it is the higher-level skills that are required to make decisions. According to Karen Owens, an associate lecturer, "Nurses have to be able to think critically all the time they are on duty because they are responsible for patient safety."
While academic skills are important at all ages, they are not the most important component of formal education. General thinking skills are just as important, as these are the skills that set a firm foundation for further learning. You can encourage and develop these higher-level thinking skills through a wide variety of games and activities that encourage kindergarteners to think for themselves rather than just passively absorb information.
The nursing profession has evolved from nurses assisting physicians to becoming independent practitioners of nursing care. As a result, nurses no longer rely solely on physicians' instructions to do their jobs. Nurses make independent clinical decisions within their scope of practice to provide the most appropriate care for patients in their charge. This requires good critical thinking skills and abilities. Nursing schools have become increasingly concerned with instilling not just information but also critical thinking and reasoning in their students. They have several tools at their disposal.
Critical thinking is the process of deducing a certain fact or piece of information about a problem or situation, either hypothetical or real, and deciding what you must, or should, think or do. Critical thinking can apply to reading literature, completing math problems, or studying history, as examples. It deals with gathering evidence through observation so that you can make informed judgments. Critical thinking is useful in all walks of life, so it is never a bad idea to improve your critical thinking skills.
Critical thinking skills help people of all ages shape their own opinions based on facts and data presented through research, news stories and experiences. Learn critical thinking skills by using news articles taken from research journals, newspapers or textbooks, and analyzing the information presented in them to form your own opinion and arguments for or against the topic.
Critical thinking skills are distinctly different from your ability to recall facts. Recalling information is one thing, but critical thinking skills describe the ability to actually synthesize and process information. More than just knowing facts offhand, a critical thinker will be able to look at facts and understand their importance. This is a vital skill as we are bombarded by information every day and it is up to us to function as our own "filter," using critical thinking skills to assess, process and analyze the facts that are thrown at us.
Thinking critically means understanding others' viewpoints, recognizing biases and then forming an opinion. Critical thinking skills are essential for problem solving, constructing arguments, detecting mistakes in reasoning and reflecting on personal beliefs and values. Teachers can show students how to gather and analyze information from multiple sources, synthesize it with prior knowledge and consider it logically through classroom discussions and assignments. Even young children can learn to become critical thinkers if we let them think for themselves and ask carefully worded questions.
Mortimer Adler wrote about four different types of reading in his classic "How to Read a Book." The two lower-order types of reading he called "elementary reading" and "inspectional reading," in which the reader is merely reading the words on a page and perhaps gaining some specific information from the author. Adler then wrote about two types of reading that require higher-order thinking skills, namely analytical and syntopical reading. These types of reading involve independent critical thinking. Readers can learn how to practice critical reading by implementing a few thinking skills.
A Ph.D. requires commitment, passion for the subject matter and a lot of perseverance and patience. Ph.D. students must also have a number of critical thinking skills in order to successfully finish the degree. The critical thinking skills that Ph.D. students refine and fine-tune while working on the dissertation make them attractive candidates in the professional and non-academic worlds.
Definitions of what constitutes "critical thinking" vary. Generally, this term refers to a process of thinking that is organized, is logical and leads to a conclusion or valid opinion about a given topic. Teachers who help students develop and improve critical thinking skills are interested in creating active learners who are able to engage in topics, make sound judgments and become consumers of knowledge.
Characterized as the conceptualization, analysis and evaluation of information, critical thinking skills are crucial for teenagers to learn and use actively. Teenagers need critical thinking skills to compete for top colleges and universities, and their future professors will expect them to use such skills constantly and thoroughly. Critical thinking is also necessary for teenagers to stand up to peer pressure.
Acquiring critical thinking skills takes time, but it is crucial for college students to develop these skills to succeed in their education and in later careers. Critical thinking refers to effective thinking: to understanding and evaluating material and to making reasoned and logical decisions. Critical thinking is built upon four related skills that, when used together, provide students with the ability to solve any problem.
Critical thinking skills are an important part of our knowledge base that we are never too old to practice or add to. Critical thinking begins when we possess enough information to form coherent thoughts; it continues throughout our entire life. Our level of critical thinking skills determines how well we make decisions. The more we practice critical thinking, the better equipped we are to make intelligent decisions.
Free will is one of the oldest concepts in philosophy. It has been debated since the beginning of philosophy as a subject, and almost every major philosopher has contributed to the debate. You do not have to be an expert or even a student of philosophy to understand the basics of free will. Simply by thinking through the issues involved and discussing them with someone, you are grappling with a major philosophical issue. There are no right or wrong answers, although by understanding the principal issues you can gain a greater appreciation of the subject.
The ability to think critically and solve problems is useful in many areas of work and study. These skills can be taught and learned separately, although critical thinking is often useful in problem solving. Thinking critically means considering things objectively, including the pros and cons or problems and benefits of a topic or situation. Thinking critically enables you to take a balanced look at a topic, which is useful for essays but also for making decisions in real life. Problem-solving skills are useful in many situations, such as working out math equations, working out the best route to get…
Educational learning objectives are not the same thing as learning goals. Goals are broad categorizations of things students will learn, whereas learning objectives state specific, measurable tasks that students will be able to perform upon completion of a lesson. Write the objectives before designing the lesson plan and then tailor each step of the lesson toward fulfilling an objective.
Not all thinking is equal in scope; thinking skills grow and improve with age. According to Piaget, young children learn by adapting to and accommodating experiences, logic is seldom seen before age seven, and formal thinking skills that deal with abstract and hypothetical elements are seldom seen before age 11. When discussions regarding thinking skills arise, "Bloom's Taxonomy of Cognitive Domains" is usually the basis of the discussion. This pyramid model, which was revised by Anderson and Krathwohl in 2001, attempts to explain the progression of thinking skills. On this revised pyramid, the base is "remembering" or the use…
Nurse aides play a fundamental role in extending patient care, whether in hospitals or long-term care facilities. The quality of patient care that nurse aides provide improves when ongoing learning opportunities are effective. Equipping nurse aides to approach their jobs with critical-thinking skills and hands-on practical knowledge requires a low investment but can yield maximum impact in your health care environment.
An essential part of schooling for children and young adults is the development of critical thinking skills. Without the ability to examine and analyze data from the world around them, students do not gain the skills to ask key questions and evaluate the answers. These skills prepare children and young people (and adult learners, of course) to formulate and express their ideas and to communicate them to others. The ability to think critically enables an open mind and a balanced view, allowing the individual to function well in the workplace, at home and throughout society as she matures.
The modern test-driven curriculum offers little opportunity for students to practice creative problem solving or critical thinking skills. Critical thinking problems often have no single correct answer. Critical thinking skills are also known as higher-order thinking skills, and include such tasks as application of information, analysis of facts, synthesis of data and evaluation of situations. Higher-order thinking skills are usually associated with the middle school and high school years, but critical thinking can and should be encouraged with younger children, particularly those who are intellectually gifted.
According to the Journal of Athletic Training, "critical thinking" is variously defined as purposeful and systematic thought, reasoned and self-regulatory judgment and the skill to engage in an activity with "reflective skepticism." In other words, avoiding oversimplified and emotionally driven actions is a necessary life skill if you don't want to become a passive receptacle for information. It is crucial for educators to promote the development and exercise of critical-thinking skills in the classroom for students at an early age.
Critical thinking is the ability to question, examine, analyze and recognize assumptions, values and conclusions. The ability to be a critical thinker is an important skill for students to develop as early as possible. It is a skill valued by employers in all fields and it is also what usually sets one student or prospective employee apart from another. Learning critical thinking is a process and while there are elements that can be taught, acquiring the skill takes time and effort.
Teaching critical thinking skills is as important and valuable as teaching facts via rote memorization. Critical thinking enables students to know how to best utilize and evaluate the knowledge they possess in order to see multiple points of view and determine multiple possible problem solutions. By engaging your kindergarten students in seemingly simple analytical activities, such as comparing and contrasting objects and story character role reversal, you will be strengthening their evaluative abilities.
According to the National Institutes of Health, nursing education focuses on transferring the knowledge of expert nurse educators to aspiring student nurses. Both traditional and nontraditional approaches to instruction exist, ranging from classroom teaching to hands-on simulation exercises.
Critical thinking is critical. It involves looking at claims or beliefs and asking whether or not it is sensible to accept them, and if we accept them, on what conditions. It is also critically important to start developing the habits of critical thought early.
Critical thinking as it pertains to reading is the ability to absorb and analyze a body of literature to determine its message, purpose and relevance to other people and events. Your effectiveness in reading critically is a measure of how well you are able to comprehend and understand what you are reading.
Though traditional math instruction focuses on teacher demonstrations followed by individual practice, math games engage students in active learning to convey math concepts. Math thinking games challenge students to go beyond lower-level thinking skills such as memorization or recall into higher-order thinking skills of evaluation and problem-solving. Critical-thinking games incorporate multiple mathematics skills and strategies so that students explore a range of possibilities before discovering the best path for solving a problem.
Critical thinking is important for everyone. We all use thinking processes constantly and we should be able to consider problems, reason and debate in a logical way. We need to know how to differentiate among quarreling, arguing, reporting, explaining and reasoning. Being able to put forward an argument that justifies our opinions is important. Demonstrating a knowledge of stereotypes, assumptions and biases shows an open mind.
As a math teacher, it's important that you teach students how to incorporate math into their critical thinking skills, because they will need to know how to solve difficult math problems in their future careers and when they have to manage their incomes and financial budgets at home. Math-related critical thinking skills can also challenge your students intellectually, and this allows them to sharpen their problem-solving skills.
Boardmaker is a software program that uses pictures and symbols for communication purposes. It is being widely used in classrooms today as a form of educational technology. Teachers who work with special education students, including autistic students, have found it to be a key learning tool to use with communication and affective disorders. Teachers use Boardmaker to create classroom materials that make use of visuals that greatly increase students' learning. It is also used to assist with behavior management in the classroom.
Beginning map skills in the classroom involve exploring different types of maps and having students work to create a map of their own that provides directions for others. While older students may focus on map symbols and scale representations, the goal of basic map skill lessons is often to expose students to the simple use of maps to help people find their way.
An assessment is a diagnostic process that measures an individual's behaviors, motivators, attitudes and competencies. Assessment tools comprise various instruments and procedures. These tools are widely used in educational institutions, nonprofit organizations and the corporate world. Successful assessment tools are designed and developed using scientific methods.
Critical thinking refers to a conglomeration of skills and mental activities incorporating analysis and conceptualization. When you use your critical thinking skills, you're flexing numerous mental muscles at once, such as rationality, judgment, self-awareness, honesty and open-mindedness. Critical thinking has been called reasonable, reflective thinking focused on deciding what to believe and do. Testing your critical thinking skills allows you to evaluate how sharp your ability to reason, analyze and assess is.
Critical thinking is the practice of actively conceptualizing and analyzing information gleaned from observation, experience, reflection, communication or other sources. Critical thinking skills vary from person to person and develop over time. Evaluation of critical thinking largely hinges upon the critical thinking capacity of the evaluator. Clarity, relevance, completeness, depth, fairness and rationality are essential to critical thinking.
Math is older than reading and writing. It is older than the Earth and our Sun. Math is as ancient as the universe itself. But the development of tools by humans to measure, quantify and understand the physical world is relatively new.
Higher-level thinking activities are intended to engage students with a deeper understanding of classroom materials. Sometimes called critical thinking activities, they let students interact more fully with the information by participating in creative exercises that foster a more well-rounded experience of the topic or issues. Higher-level thinking activities are essential for school children because they teach students to find solutions independently and with greater depth of understanding.
"Foundations for Success," the final report of the National Mathematics Advisory Panel in 2008, identified slipping mathematical prowess as a serious educational and social concern. From the high mathematical standards of the 20th Century, America is losing ground and runs the risk of not being able to sustain a work force with adequate mathematical skills. Currently, technical mathematical talent is imported from abroad to address this gap. Assessing students mathematical abilities is important, as without the basic skills, they will have trouble studying algebra, the benchmark of higher mathematical success. There are a number of assessment tools to measure mathematical…
Pattern recognition, the ability to identify repeating symbols within a given context, is a popular measure in intelligence testing. Series questions often appear on IQ tests because a person's ability to recognize, repeat, and complete patterns is essential to understanding the world. Society itself consists of many patterns, so students must be able to recognize them on their own. Teaching pattern recognition early on in elementary school helps improve students' ability to adapt to changing patterns in the real world.
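As a small illustration of the kind of item involved (an invented example, not from the source), a series question might present 3, 6, 12, 24, ... and ask for the next term. Recognizing the rule that each term doubles the previous one, a_{n+1} = 2a_n, gives 48 as the answer.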
Tracking the activities within your business is an important, highly detailed and sometimes complex endeavor. Without this process, many activities carried out within a business can become unsorted, inefficient and misunderstood. Given the variety of business activities that need to be modeled, the most understandable and concise way to build a business model is to use the Functional Decomposition method. This scheme allows users to thoroughly document and build business activities in a clear, comprehensible manner that can be understood by all.
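To make the idea concrete, here is a minimal sketch of a functional decomposition recorded in code; the activity names, the tree structure and the helper function are hypothetical illustrations, not part of any particular business-modeling tool:

# Hypothetical example: a top-level business function broken down
# into sub-activities, which is the essence of functional decomposition.
order_fulfillment = {
    "Fulfill customer order": {
        "Take order": ["Capture customer details", "Validate payment"],
        "Prepare shipment": ["Pick items", "Pack items"],
        "Deliver order": ["Schedule carrier", "Confirm delivery"],
    }
}

def print_decomposition(node, indent=0):
    # Walk the tree and print each activity, indented one level per tier.
    if isinstance(node, dict):
        for name, children in node.items():
            print(" " * indent + name)
            print_decomposition(children, indent + 2)
    else:
        for name in node:
            print(" " * indent + name)

print_decomposition(order_fulfillment)

Reading the printed tree from top to bottom gives the same layered, easy-to-scan view of activities that the method aims to provide on paper.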
Critical thinking skills are imperative for young students and even adults to perform well academically and professionally. Critical thinking skills are usually separated into three categories: affective, cognitive strategies encompassing macro-abilities and cognitive strategies for micro-skills. These skills will help you to understand yourself and the people you interact with, and you will also be able to utilize information better.
Critical thinking skills help a person solve problems and offer creative solutions. Critical thinking also leads to reflective thinking, so each experience is an educational one. Even people who do not have strong critical thinking skills can improve those skills. It is important to increase critical thinking skills to look at problems in a logical way.
Critical thinking is a way of evaluating a subject. It helps people solve problems by generating a good number of well-rounded ideas and critically deciding whether or not they work.
Most people have personal positions on matters such as politics and religion, and try to defend their opinions reasonably. However, many students must learn how to apply these skills in their academic lives. Critical thinking enables students to develop independent opinions and conclusions about academic matters. Such qualities push a student toward academic success, because they are necessary for good academic research work. Educationists have identified certain skills that students can cultivate in order to develop academic critical thinking.
Well-developed critical thinking skills help prepare students for life by showing them how to use problem solving and rational thinking to make decisions. Critical thinkers seek precise information rather than quick information on important topics. Students without sharpened critical thinking skills can become dependent thinkers, relying on immediate outside information for validation instead of on their own thought processes. Establish an enhanced system of critical thinking by training your students with activities that sharpen their skills.
Abstract thinking is a high-level thought process. Someone who is thinking abstractly is considering a concept in a broad, general and non-specific way. Abstract thinking is the opposite of concrete thinking.
Critical thinking involves effective communication and problem-solving, and requires constant analyzing, reasoning and evaluating. This type of thinking is said to be the best way to get to the truth, according to popular psychologist and author, James J. Messina, Ph.D. Developing critical thinking skills takes time and effort. However, you can train yourself to think critically by learning and practicing daily the habits and processes of critical thinkers.
Mystery-themed games can add a touch of excitement to the classroom. These games work well for a variety of subjects, and students of all ages will enjoy using their brainpower to solve a mystery. Mystery games challenge students to use context clues to find a solution, and the critical-thinking skills they learn will prove useful for years.
Critical thinking skills allow you to more effectively explore the information that you acquire through learning and apply this information in your daily life. Individuals who are good critical thinkers question newly acquired information as a matter of course. By working to develop your critical thinking skills, you can make the information you learn more relevant and allow it to impact your thoughts, ideas and actions.
Cognitive ability testing helps determine a child's level of academic success. The most common test used, the Cognitive Abilities Test (CogAT), measures a child's aptitude in verbal, nonverbal, and quantitative reasoning. Reasoning skills directly affect problem solving and decision making, thus impacting academic achievement. The CogAT identifies a child's underdeveloped reasoning skills, allowing improvement to be made, both in and out of the classroom.
The development of critical-thinking skills is important in children and adults. Deductive reasoning is a subset of critical thinking. Critical thinking can also be called evaluative thinking, according to the American Scientific Affiliation. It is the ability to solve problems and draw conclusions with limited data. Deductive reasoning is the part of critical thinking that allows for problem solving.
Boardmaker is a software program used by teachers to create colorful graphic displays. The program contains thousands of communication symbols in color as well as black and white. Use the program to facilitate communication with non-verbal students and to develop reading and writing activities for all students. Parents and teachers use Boardmaker to assist in communicating with children with special needs and those who are autistic.
Assessments of Applied Mathematics measure your ability to use mathematical concepts and calculations along with critical thinking skills to solve real-world problems. These assessments may help to determine your academic needs or progress, to make vocational decisions based on your abilities, or to demonstrate to potential employers your ability to use mathematical reasoning. Although you cannot memorize facts for these tests, many publishers provide practice tests to familiarize you with the types of questions you may encounter and to point out skills that you need to improve before taking the official assessment.
The GMAT, or Graduate Management Admission Test, contains a critical reasoning section that tests critical thinking skills necessary for much of the coursework in a Master of Business Administration program. The best way to prepare for the test is to become familiar with the types of questions asked and to work through past tests and sample questions with explanations of the answers. Some people take test preparation and/or logic courses before the test. In addition, there are strategies you can try to achieve a high score on the GMAT.
The NCLEX, or National Council Licensure Examination, was designed to test preparation, knowledge and safety skills of aspiring nurses. This test has the reputation of being extremely difficult to pass. The NCLEX tests your critical thinking skills, rather than memorization skills. Therefore, you must approach the NCLEX from a different perspective than you would other exams.
Citizenship education is an essential academic core subject in societies that are built upon democratic principles. Democracies need citizens who can use critical thinking skills to decide the correct course of action in political decisions that must be made. Critical thinking by a wide range of citizens brings about informed and lively interaction between people, which is a healthy outcome in a free society.
In business and in life, success is often determined by how a person thinks things through. A reactive thinker may battle crisis after crisis, but a proactive one seems to have everything figured out beforehand and can better deal with the curve balls life throws. Often a person seems extraordinarily lucky in life and in business, but this luck can be traced to the foresight and vision that is the hallmark of good proactive thinking.
Critical thinking skills are beneficial to both young and old students. They help both in and outside of the classroom. While young students can often approach the learning of critical thinking in a more theoretical manner, many adult students appreciate a more hands-on and realistic approach to learning critical thinking skills. Teaching critical thinking skills to adults should be grounded in reality and should illustrate the benefits of critical thinking in everyday life.
Critical thinking skills, or reflective thought, teach students to devise alternative ways to solve problems in different contexts. Such skills are imperative in early childhood development, when students use role-playing to distinguish between school and family life, set goals and foster creativity. Puzzles, CDs, and video games help kids think outside the box. Disney corporation creates critical-thinking games, many of which are available in digital format. Disney games help students acquire analytical and cognitive skills necessary for mental development.
Critical thinking involves challenging basic assumptions and using facts to gain a deeper understanding of a situation or problem. It is not a skill that comes naturally to humans; critical thinking must be learned. Science is an excellent tool to teach critical thinking. Natural and life sciences rely on rational thinking, observation and experimentation. All of these are traits which are necessary to critical thinking.
Fun is the key for turning a boring learning process into a fun adventure. Most children prefer games over learning or chores. When learning and chores are turned into games, children learn without even realizing they are doing so. Parents and teachers can help children learn important mathematical concepts by providing lessons that stimulate interest and the imagination.
Science students can gain a greater engagement in their lessons through the process of critical thinking. The Foundation for Critical Thinking states that a cultivated critical thinker raises vital questions and problems while formulating them clearly and precisely. As an educator, you can empower your students to use self-questioning, hypothesizing and observing as part of their overall educational experience, so that they can formulate questions about results and behaviors.
Educational objectives, whether in nursing or any other subject, are formatted the same way. They always start with an action verb, and state what the student will learn and how the student will be able to apply what was learned. In order to ensure the student is using critical thinking skills while learning the content, it is a good idea to use Bloom's Taxonomy of Thinking to determine what the action verbs will be. Requirements to become a Registered Nurse (RN) have increased in the past several years, and along with those requirements comes a need for nurses to think…
Getting an education is one of the most important messages imparted to children and young adults. Educational choices can have far reaching effects on one's quality of life.
Graduates of practical nursing programs are required to take and pass a licensing exam called the NCLEX-PN before they can legally practice as a licensed practical nurse, or licensed vocational nurse.
Although it may be easier to grade tests that require a standard memorized answer, teaching critical thinking helps students think creatively and generate new ideas. As children grow up, they become part of the workforce and are faced with ever-changing problems and must be able to formulate new solutions. Teachers can facilitate critical thinking skills by creating in-class activities that teach children to brainstorm solutions and evaluate them.
Psychology is arguably the most empirical of the social sciences. Advances in psychology involve scientific experimentation, surveys and data collection and even neuroscience, which in turn requires understanding of biology and chemistry. It seems odd, then, that philosophy holds such a privileged place in an otherwise empirical field. As it turns out, philosophy informs many aspects of psychology and helps explain the conclusions of many psychological studies.
The business activity model is an alternative method of teaching college-level intermediate accounting developed by Catanach, Croll and Grinaker. Changes in learning styles among students in general, coupled with a new Certified Public Accountant (CPA) exam, prompted a review of the way they presented intermediate accounting, and led to this new interactive model.
According to author Ann Dobie, critically interpreting a text is not much different from a casual conversation with friends: What was your favorite part, or what was most interesting to you? To some, that may seem to be a pretty basic representation of an intensive form of study. Either way, the exploration of text through multiple critical approaches can be a rewarding and empowering experience.
By practicing the simple task of pattern recognition with students, teachers can help them learn to use logic. To determine the next link in a repetitive pattern, students must apply critical thinking and problem-solving skills. By developing these skills, students can use them to their advantage in other academic endeavors. Teachers can choose from a wide array of activities to effectively teach this skill to their pupils.
Many students wonder why they're required to take math courses. However, mathematical competency is crucial to career success in several ways. Young adults entering college and the workforce will understand how math proficiency can determine career direction and salary.
Diagrams, or graphic organizers, are essential to critical-thinking skills in education and beyond. Students and professionals use critical-thinking skills daily. Diagrams also expedite the sharing of information.
Many children today lack sufficient critical thinking skills. If one of your students or children seems deficient in these skills, there are ways that you can help her to develop them. Working with the child on these skills will not only help her succeed in school, it will also prepare her for life. Understanding how to think critically about the world around you is an important life skill.
Critical thinking skills include the ability to evaluate information, to formulate solutions for given problems, to analyze details for trends and patterns, and to apply previous experiences to current situations. They are vital to schooling, job performance and handling myriad problems in life. As a teacher or a parent, you should assess students not just to see if they are absorbing classroom information, but if they can apply critical thinking skills appropriately. The ability will serve them well as they enter adulthood and embark upon a career.
Although the concept of critical thinking goes back to Socrates and his Socratic method in 400 B.C.E., many educators have relied on memorization recall to assess their students. Because memorization is easier to teach and test than critical thinking skills, it has snuck into the assessments of many school districts over the years. Currently, critical thinking is emphasized in most school districts in the U.S. Critical thinking is something of a challenge to measure because it includes a complex combination of skills and is interdisciplinary. Critical thinking crosses subject matter divisions, and responses almost necessarily are not all the same,…
Critical thinking, reading, and writing are among the most important skills necessary for succeeding in high school and college. Teachers will assume that their students have already mastered basic academic skills. Now they will expect them to take more responsibility for in-depth learning by reading and evaluating information, then writing their conclusions and opinions in a formal, organized style. These skills can be improved by using specific metacognitive strategies at each stage of the process.
More and more, parents and educators are beginning to understand the importance of reflective and critical thinking skills. Reflective thinking skills give children the ability to not only learn content better, but to examine their own behavior and learn from their mistakes. Critical thinking skills are essential for gaining a deep understanding of content, and for making well thought-out decisions later in life, such as deciding who might be the best candidate to vote for, or which business might be the best to go into. Educators can develop both reflective and critical thinking skills by using a variety of strategies.
Critical thinking is the educational terminology for a student's ability to think logically. As a student matures, his thinking is ideally trained to go from simple summarizing and repetition of facts to the ability to tie facts together and interpret them to prove a requested point. This skill is essential to life in the real world, and must be accurately determined and assessed by teachers.
For first-graders, learning to read, learning basic mathematical skills, and learning to write numbers are top priorities. But of all the basic skills young students learn, critical thinking is one of the most important. Applying, analyzing and evaluating information is one of the foundations of education and, if taught at an early age, students can master the art of thinking critically.
The differences between comprehension skills and critical thinking skills are subtle. The former can be regarded as skills that aid in understanding something that is being read or heard. The latter, on the other hand, are skills that allow the person to delve deeper into what is going on through analysis, application and evaluation.
Teaching higher level thinking skills is critical, especially with high stakes testing. Having the time to help all students master the learning objectives that they will be tested over can take up almost every minute. Education is sometimes accused of teaching the test and educators themselves lament that they feel like their hands are tied and that they are not given the opportunity to really teach more creative thinking. However, there are techniques that can ensure the best of both worlds in the same limited amount of time.
Learning how to become a critical thinker is an acquired skill that many people believe they possess but very few actually do. Most college-educated people have been exposed to critical thinking and understand it, but that does not mean they all apply its teachings. The skill of critical thinking is very apparent when discussing a topic with someone who thinks critically and someone who does not. The critical thinker will listen to all the facts without making any assumptions and will usually not believe what was said to them until research verifies it. A non-critical thinker will…
Deciding to get a master of business administration, or MBA, is a big career move and a huge commitment of time and money. Understanding the benefits of the degree can help you decide whether now is the right time.
Sometimes a teacher must convince his or her students to use critical thinking skills to complete a given task. Critical thinking is an important life skill. If students learn critical thinking, they are less likely to fall prey to scams or find themselves in less-than-desirable situations. The best reason to motivate students to learn critical thinking skills is that students who learn to think critically in the classroom will apply that skill elsewhere in their lives.
To meet the demands of today's ever evolving, rapid-fire world, students need to develop their critical thinking and problem solving skills more than ever. Educators debate among themselves about the meaning of the terms critical thinking and problem solving skills.
According to Bloom's Taxonomy of Educational Objectives, creative and critical thinking skills fall under the highest level of cognitive development. To think creatively and critically, we have to use both sides of our brain and understand many aspects of basic knowledge first. Both skills are extremely important for achievement and success in the world today, and there are easy things that parents and teachers can do to build these skills in children.
Critical thinking skills are necessary in all aspects of life. Whether it be your work or home life, dealing with school or parents, thinking critically can help you solve problems quickly and easily. Using a critical thinking graph helps you obtain those skills. Practice with different scenarios using the graph until you feel that you have a confident grasp on critical thinking skills.
James Madison, the fourth president of the United States, said, "A well-instructed people alone can be permanently a free people." It is therefore incumbent upon any society that values its liberty to educate its citizens in order to preserve freedom. A literate and well-trained population is also essential to maintaining a vibrant economy.
One of the most popular questions asked during a job interview is, "What are your strengths and weaknesses?" Many candidates make the mistake of giving a generic answer, such as "I am responsible and dependable" as a strength and "I work too hard" for a weakness. Take some time to explore and understand your strengths and weaknesses so that you will be prepared with a more meaningful answer.
To master their subjects, college students must approach them with healthy doses of skepticism. In his essay "Critical Thinking: What It Is and Why It Counts," Peter Facione asserts, "Critical thinking is about how you approach problems, questions, issues. It is the best way we know of to get to the truth." In college, students learn not to believe everything they hear or read, but to use their reasoning skills to investigate theories and form their own opinions. This is called critical thinking.
Math is an area in which critical thinking skills are very important. Students should focus on areas within math that promote critical thinking skills. Teachers should encourage students to think critically when dealing with math problems. Critical thinking skills in math enhance a student's ability to learn, be logical and associate math skills with the real world.
Skills of intuition and critical thinking are essential in the realm of education; their use is often required in real-world situations; and in certain instances, such as in a robbery, a person's use of these skills can mean the difference between life and death.
Memory is a fundamental tool for human learning. Children are taught to develop their skills of memorization even during their toddler years. However, true learning occurs not only when a child draws upon his memory to express his ideas, but when he demonstrates the ability to refine these ideas through interaction with others.
Children absorb information like sponges. It's important for parents, guardians and teachers of children to foster an atmosphere that encourages learning. One way to encourage learning in children is through activities that promote critical thinking skills. The best thing about most critical thinking skills activities is that they can be modified to the grade level of your child.
Integrating critical thinking into the classroom helps students improve in all content areas. The incorporation of higher-level instruction and assessments makes for an intellectually well-rounded student.
One of the most important things a person can learn is how to think critically. The sooner a person learns to examine evidence in a rigorous, critical manner, the easier it is for them to avoid being taken advantage of by crooks and charlatans. Most critical thinking strategies are Socratic and teacher-centric. It's your job to coax new thoughts from your students and demonstrate new ways to apply true reason.
The ability to think critically is essential when searching for jobs or moving into higher education. In schools, certain activities and lesson plans can foster critical thinking skills within students.
One of the most frustrating experiences that teachers encounter is the unwillingness of children to think critically. What teachers may fail to realize is that it is their job to develop critical thinking skills in their students--and that they have the ability to do so.
A writer can have a well-formed mental vision of some insightful concept, but be incapable of communicating those same ideas in intelligible, effective verse to a popular readership. Here are some ways that brilliant minds can break down their high concepts into comprehensible prose.
Learning facts and figures is important, but learning how to learn is an even more critical aspect of the educational process. Critical thinking involves a number of skills, including pattern recognition, comparison, sequencing and inductive and deductive reasoning. Developing good thinking skills can improve reading comprehension, make it easier to learn new information, and help the student make inferences and connections from the material he learns.
Teaching critical thinking skills is not a simple proposition. Educators must consider methods for inspiring interaction and reflection, prompting deeper understanding. The best lesson plans for teaching critical thinking skills incorporate open-ended projects and activities that address various modalities.
Teachers often are tempted to use direct teaching strategies to relate complex ideas to students. However, hands-on tasks typically are more effective in reaching students of various learning styles and promoting deeper understanding. Inspire more critical thinking in your classroom by integrating interactive tasks that are open-ended and multimodal.
Critical thinking at its most basic is defined as "the awakening of the intellect to the study of itself" by CriticalThinking.org. Using your experiences, asking relevant questions, researching and using reason to gather an intelligent conclusion about the world is critical thinking.
Critical-thinking skills exercises help a person to understand the reasons for one's beliefs and actions. According to OpenCourseWare in Critical Thinking, critical and creative thinking are the two basic thinking skills. Critical thinking is the ability to think clearly and rationally, whereas creativity is a matter of coming up with new and useful possibilities. Both are crucial for solving problems and discovering new knowledge. Examples of critical-thinking exercises include brain teasers, logic puzzles and values analysis exercises.
The word "discrimination" by itself suggests nothing more than the act or ability to distinguish one thing from another. In the context of human relations, however, the word can take on a negative meaning when individual characteristics become the basis for making decisions. Modern society acknowledges that discrimination, whether intentional or unintentional, is wrong. Use lesson plans to develop critical thinking skills about discrimination and help others be mindful of their behavior.
A good critical thinker knows how to separate facts from opinions, how to examine an issue from all sides, how to make rational inferences and how to withhold personal judgment or biases.
Helping students build critical thinking skills is an important task for every teacher. Students who learn how to analyze difficult concepts and think logically will score well on standardized tests and will perform better in advanced classes. Student-centered lessons that escape the classic format of passive learning and allow students to participate in learning are essential to building critical thinking skills. Teachers who use projects such as speeches, debates, persuasive writing, and analysis of world issues will help students develop the skill of forming opinions and thinking and communication skills.
What does it mean to think critically about the world around us? Socrates posed the question some 2,500 years ago in challenging the commonsense assumptions held by his fellow citizens. How, Socrates probed, can we rationally justify our claims to knowledge? What, he would ask, does it mean to "believe" someone is virtuous? For that matter, what is virtue? When answered, Socrates would challenge his interlocutor once more: If all you say is true, though, who then legitimates this concept of virtue? Question and answer dialogue of this sort is known as the Socratic method, a mode of critical inquiry…
Logic games and problems provide a useful and entertaining diversion that can help you to think more critically. However, for critical thinking to become a part of daily life, you must learn to think differently. Build your cognitive critical thinking skills by paying close attention to, and improving on, the way you approach problems at work and in your daily life. Retraining your brain takes time and purposeful effort, but results in a pattern of thinking that is much more cognitively critical, logical and precise.
Critical thinking skills are important to the cognitive development of children. The introduction of these skills can begin as early as pre-school and kindergarten. It is important that the skills of analyzing, comparing and synthesizing be developed at an early age so students can apply them to the appropriate situation, whether in academic or personal life.
We use critical thinking skills throughout our daily life. We have to make decisions, calculate risks, figure out situations, predict outcomes of our actions and prioritize our daily activities. Several categories of thinking comprise what is known as critical thinking. Many simple activities such as these help to sharpen critical thinking skills.
In math, critical thinking is about thinking what is being asked in a given problem. Determine what operations and procedures are used in a math problem with help from a math teacher in this free video on math lessons.
The ability to think critically enables you to create deeper interpretations of texts, which allows you to better understand the nuances of a text as well as its place in your academic tradition, that is, its impact and relative importance. Drawing connections between texts and ideas you've read about is one of the first steps of critical thinking and will dramatically improve the quality of your written analysis.
Whether you are homeschooling your children or are a busy soccer mom and head of the Parent Teacher Association, there are some easy ways to implement everyday mathematics lessons into your routine. No matter what the age or skill level of your child, you can help develop and refine his math skills by breaking down everyday tasks and having your child be a part of them.
Critical thinking in math is a unique combination of basic common sense and formulaic extrapolation. Math---unlike any other subject---attests virtually immediately to the success or failure of the student's critical thinking process. Backtracking is often quite simple, and anyone interested in honing the fine art of overall critical thinking will find that practicing mathematics is a surefire way to exercise this intellectual muscle. This holds true especially for the advanced mathematical equations and problems, some of which do not immediately yield the desired pass/fail response. Only a well developed sense of critical thinking in math permits students to enjoy these…
Censor and censure have somewhat similar meanings; therefore, they are often confused. To use them correctly, you must understand the meanings of the terms and the context of your writing. Follow these rules and examples.
Teacher mentoring is an important part of a teacher's first year. New teachers tend to struggle especially in the areas of classroom management and lesson planning. This article outlines the steps involved in giving a new teacher the support he or she needs during the first year of teaching.
Learning how to think critically in math is the foundation on which you may build a lifetime of ever more involved mathematical study. As a primary building block and approach to a plethora of math problems—as well as those in associated disciplines such as physics, architecture, chemistry, and astrophysics—the student is certain to excel, no matter the extent of the problem featured or the scope of the subject matter at hand. With the information on how to think critically in math so easily mastered, it is not surprising that parents, teachers, and even students themselves are shifting their approach to…
False assumptions can get you into all sorts of trouble. Whether you're struggling with a relationship, studying for a hard class or meeting new people, your preconceptions can make or break your ability to cope effectively with the situation. In order to stop making false assumptions, you'll have to check your ego at the door and keep your mind open to new possibilities.
Critical thinking is a form of higher level thinking, sometimes called the scientific way of thinking. Critical thinking helps you make decisions by analyzing and evaluating your facts. Work on improving your critical thinking skills, so you can make more intelligent decisions.
Reading comprehension requires you to connect with the reading assignment. Marking and annotating the text gets you to engage and interact with it in a physical way. Your pencil, pen and highlighter are terrific tools you can use to improve reading comprehension and remember the assigned text. Get the most out of a reading assignment by marking it up.
Occasionally, a teacher may ask you to write a summary of your reading assignment. But you don't have to wait to be assigned to write a summary. Making a habit of summarizing what you read is a useful tool to improve reader comprehension, and also a valuable critical-thinking exercise. Summarizing a reading assignment increases recall and condenses an author's ideas down to a few sentences.
You can instill critical thinking skills in your students by encouraging them to apply their knowledge, question what they read and look behind the surface message of media. You can teach critical thinking skills within any subject matter.
The MCAT, or Medical College Admission Test, includes a one-hour writing section that tests aspiring physicians' ability to think critically and communicate effectively. Medical school admissions officers recognize the importance of communication in the delivery of good medical care. Doctors must be able to speak effectively and clearly to patients, colleagues and, at times, to political groups or the media.
Some of the most successful criminals were successful because they were able to think like a cop, or have a cop's mindset. This really isn't too hard to do if you know anything about the profession, and that is becoming easier and easier with the types of programming on television these days. Having the mindset of a cop is beneficial in many other endeavors and not necessarily for devious pursuits. | http://www.ehow.com/critical-thinking-skills/ | 13
18 | Here's a very simple GNU Make function: it takes three arguments and makes a
'date' out of them by inserting / between the first and second arguments and between the second and third arguments:
make_date = $1/$2/$3
The first thing to notice is that make_date is defined just like any other GNU Make macro (you must use = and not := for reasons we'll see below).
To use make_date we $(call) it like this:
today = $(call make_date,19,12,2007)
That will result in today containing 19/12/2007.
The macro uses special macros $1, $2, and $3. These macros contain the argument specified in the $(call). $1 is the first argument, $2 the second and so on.
There's no maximum number of arguments, but if you go above 10 then you need parens: you can't write $10 instead of $(10). There's also no minimum number. Arguments that are missing are just undefined and will typically be treated as an empty string.
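To see how missing arguments behave, here is a small sketch; the macro name describe is made up purely for illustration, and $(info) is used only to print the result:
describe = [$1][$2][$3]
$(info $(call describe,only-one))
This prints [only-one][][], because the second and third arguments were never supplied and so expand to empty strings.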
The special argument $0 contains the name of the function. In the example above $0 is make_date.
Since functions are just macros with some special automatic macros filled in (if you use the $(origin) function on any of the argument macros ($1 etc.) you'll find that they are classed as automatic just like $@), you can use GNU Make built in functions to build up complex functions.
Here's a function that turns every / into a \ in a path:
unix_to_dos = $(subst /,\,$1)
using the built-in $(subst) function. Don't be worried about the use of / and \ there. GNU Make does very
little escaping and a literal \ is most of the time just a \.
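For example, a possible use of unix_to_dos (the path and the variable name DOS_PATH are invented for illustration):
DOS_PATH := $(call unix_to_dos,src/main/app.c)
$(info $(DOS_PATH))
After this, DOS_PATH contains src\main\app.c.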
Some argument handling gotchas
When GNU Make is processing a $(call) it starts by splitting the argument list on commas to set $1 etc. The arguments are expanded so that $1 etc. are completely expanded before they are ever referenced (it's as if GNU Make used := to set them). This means that if an argument has a side-effect (such as calling $(shell)) then that side-effect will always occur as soon as the $(call) is executed, even if the argument was never actually used by the function.
One common problem is that if an argument contains a comma the splitting of
arguments can go wrong. For example, here's a simple function that swaps its two arguments:
swap = $2 $1
If you do $(call swap,first,argument,second) GNU Make doesn't have any way to know that the first argument was meant to be first,argument and swap ends up returning argument first instead of second first,argument.
There are two ways around this. You could simply hide the first argument inside a macro. Since GNU Make doesn't expand the arguments until after splitting, a comma inside a macro will not cause any confusion:
FIRST := first,argument
SWAPPED := $(call swap,$(FIRST),second)
The other way to do this is to create a simple macro that just contains a comma and use that instead:
c := ,
SWAPPED := $(call swap,first$cargument,second)
Or even call that macro , and use it (with parens):
, := ,
SWAPPED := $(call swap,first$(,)argument,second)
Calling built-in functions
It's possible to use the $(call) syntax with built-in GNU Make functions. For example, you could call $(warning) like this:
$(call warning,danger danger)
This is useful because it means that you can pass any function name as an argument to a user-defined function and $(call) it without needing to know if it's built-in or not.
This gives you the ability to create functions that act on functions. The classic functional programming map function (which applies a function to every member of a list returning the resulting list) can be created | http://www.agileconnection.com/article/gnu-make-user-defined-functions | 13
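A common way to write such a map function in standard GNU Make (a sketch, not necessarily the exact code the author had in mind) is:
map = $(foreach a,$(2),$(call $(1),$(a)))
For example, $(call map,unix_to_dos,a/b c/d) applies the unix_to_dos function defined earlier to each word in the list, returning a\b c\d.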
Fallacies are defects in an argument other than false premises which cause an argument to be invalid, unsound or weak. Fallacies can be separated into two general groups: formal and informal. A formal fallacy is a defect which can be identified merely by looking at the logical structure of an argument rather than at any specific statements.
Formal fallacies are found only in deductive arguments with identifiable forms. One of the things which makes them appear reasonable is the fact that they look like and mimic valid logical arguments, but are in fact invalid. Here is an example:
1. All humans are mammals. (premise)
2. All cats are mammals. (premise)
3. All humans are cats. (conclusion)
Both premises in this argument are true, but the conclusion is false. The defect is a formal fallacy, and can be demonstrated by reducing the argument to its bare structure:
1. All A are C
2. All B are C
3. All A are B
It does not really matter what A, B and C stand for; we could replace them with wines, milk and beverages. The argument would still be invalid and for the exact same reason. Sometimes, therefore, it is helpful to reduce an argument to its structure and ignore content in order to see if it is valid.
Informal fallacies are defects which can be identified only through an analysis of the actual content of the argument rather than through its structure. Here is an example:
1. Geological events produce rock. (premise)
2. Rock is a type of music. (premise)
3. Geological events produce music. (conclusion)
The premises in this argument are true, but clearly the conclusion is false. Is the defect a formal fallacy or an informal fallacy? To see if this is actually a formal fallacy, we have to break it down to its basic structure:
1. A = B
2. B = C
3. A = C
As we can see, this structure is valid, therefore the defect cannot be a formal fallacy identifiable from the structure. Therefore, the defect must be an informal fallacy identifiable from the content. In fact, when we examine the content, we find that a key term, rock, is being used with two different definitions (the technical term for this sort of fallacy is Equivocation).
Informal fallacies can work in several ways. Some distract the reader from what is really going on. Some, like in the above example, make use of vagueness or ambiguity to cause confusion. Some appeal to emotions rather than logic and reason.
Categorizing fallacies can be done in a number of different methods. Aristotle was the first to try and systematically describe and categorize fallacies, identifying thirteen fallacies divided into two groups. Since then many more have been described and the categorization is more complicated. Thus, while the categorization used here should prove. | http://atheism.about.com/od/logicalarguments/a/fallacy.htm | 13 |
24 | This is the print version of Geometry
Part I- Euclidean Geometry
Chapter 1: Points, Lines, Line Segments and Rays
Points and lines are two of the most fundamental concepts in Geometry, but they are also the most difficult to define. We can describe intuitively their characteristics, but there is no set definition for them: they, along with the plane, are the undefined terms of geometry. All other geometric definitions and concepts are built on the undefined ideas of the point, line and plane. Nevertheless, we shall try to define them.
A point is an exact location in space. Points are dimensionless. That is, a point has no width, length, or height. We locate points relative to some arbitrary standard point, often called the "origin". Many physical objects suggest the idea of a point. Examples include the tip of a pencil, the corner of a cube, or a dot on a sheet of paper.
As for a line segment, we specify a line with two points. Starting with the corresponding line segment, we find other line segments that share at least two points with the original line segment. In this way we extend the original line segment indefinitely. The set of all possible line segments findable in this way constitutes a line. A line extends indefinitely in a single dimension. Its length, having no limit, is infinite. Like the line segments that constitute it, it has no width or height. You may specify a line by specifying any two points within the line. For any two points, only one line passes through both points. On the other hand, an unlimited number of lines pass through any single point.
We construct a ray similarly to the way we constructed a line, but we extend the line segment beyond only one of the original two points. A ray extends indefinitely in one direction, but ends at a single point in the other direction. That point is called the end-point of the ray. Note that a line segment has two end-points, a ray one, and a line none.
A point exists in zero dimensions. A line exists in one dimension, and we specify a line with two points. A plane exists in two dimensions. We specify a plane with three points. Any two of the points specify a line. All possible lines that pass through the third point and any point in the line make up a plane. In more obvious language, a plane is a flat surface that extends indefinitely in its two dimensions, length and width. A plane has no height.
Space exists in three dimensions. Space is made up of all possible planes, lines, and points. It extends indefinitely in all directions.
Mathematics can extend space beyond the three dimensions of length, width, and height. We then refer to "normal" space as 3-dimensional space. A 4-dimensional space consists of an infinite number of 3-dimensional spaces. Etc.
[How we label and reference points, lines, and planes.]
Chapter 2: Angles
An angle is the union of two rays with a common endpoint, called the vertex. The angles formed by vertical and horizontal lines are called right angles; lines, segments, or rays that intersect in right angles are said to be perpendicular.
Angles, for our purposes, can be measured in either degrees (from 0 to 360) or radians (from 0 to 2π). The measure of an angle can be determined by measuring along the arc it maps out on a circle. In radians we consider the length of the arc of the circle mapped out by the angle. Since the circumference of a circle is 2πr, a right angle is π/2 radians. In degrees, the circle is 360 degrees, and so a right angle would be 90 degrees.
Angles are named in several ways.
- By naming the vertex of the angle (only if there is only one angle formed at that vertex; the name must be non-ambiguous)
- By naming a point on each side of the angle with the vertex in between.
- By placing a small number on the interior of the angle near the vertex.
Classification of Angles by Degree Measure
- an angle is said to be acute if it measures between 0 and 90 degrees, exclusive.
- an angle is said to be right if it measures 90 degrees.
- notice the small box placed in the corner of a right angle; unless the box is present, it is not assumed that the angle is 90 degrees.
- all right angles are congruent
- an angle is said to be obtuse if it measures between 90 and 180 degrees, exclusive.
Special Pairs of Angles
- adjacent angles
- adjacent angles are angles with a common vertex and a common side.
- adjacent angles have no interior points in common.
- complementary angles
- complementary angles are two angles whose sum is 90 degrees.
- complementary angles may or may not be adjacent.
- if two complementary angles are adjacent, then their exterior sides are perpendicular.
- supplementary angles
- two angles are said to be supplementary if their sum is 180 degrees.
- supplementary angles need not be adjacent.
- if supplementary angles are adjacent, then the sides they do not share form a line.
- linear pair
- if a pair of angles is both adjacent and supplementary, they are said to form a linear pair.
- vertical angles
- angles with a common vertex whose sides form opposite rays are called vertical angles.
- vertical angles are congruent.
Side-Side-Side (SSS) (Postulate 12) If three sides of one triangle are congruent to three sides of a second triangle, then the two triangles are congruent.
Side-Angle-Side (SAS) (Postulate 13)
If two sides and the included angle of one triangle are congruent to two sides and the included angle of a second triangle, then the two triangles are congruent.
Angle-Side-Angle (ASA)
If two angles and the included side of one triangle are congruent to two angles and the included side of a second triangle, then the two triangles are congruent.
Angle-Angle-Side (AAS)
If two angles and a non-included side of one triangle are congruent to two angles and the corresponding non-included side of a second triangle, then the two triangles are congruent.
NO - Angle-Side-Side (ASS)
The "ASS" postulate does not work, unlike the other ones. A way that students can remember this is that "ass" is not a nice word, so we don't use it in geometry (since it does not work).
There are two approaches to furthering knowledge: reasoning from known ideas and synthesizing observations. In inductive reasoning you observe the world, and attempt to explain based on your observations. You start with no prior assumptions. Deductive reasoning consists of logical assertions from known facts.
What you need to know
Before one can start to understand logic, and thereby begin to prove geometric theorems, one must first know a few vocabulary words and symbols.
Conditional: a conditional is something which states that one statement implies another. A conditional contains two parts: the condition and the conclusion, where the former implies the latter. A conditional is always in the form "If statement 1, then statement 2." In most mathematical notation, a conditional is often written in the form p ⇒ q, which is read as "If p, then q" where p and q are statements.
Converse: the converse of a logical statement is when the conclusion becomes the condition and vice versa; i.e., p ⇒ q becomes q ⇒ p. For example, the converse of the statement "If someone is a woman, then they are a human" would be "If someone is a human, then they are a woman." The converse of a conditional does not necessarily have the same truth value as the original, though it sometimes does, as will become apparent later.
AND: And is a logical operator which is true only when both statements are true. For example, the statement "Diamond is the hardest substance known to man AND a diamond is a metal" is false. While the former statement is true, the latter is not. However, the statement "Diamond is the hardest substance known to man AND diamonds are made of carbon" would be true, because both parts are true.
OR: If two statements are joined together by "or," then the truth of the "or" statement is dependent upon whether one or both of the statements from which it is composed is true. For example, the statement "Tuesday is the day after Monday OR Thursday is the day after Saturday" would have a truth value of "true," because even though the latter statement is false, the former is true.
NOT: If a statement is preceded by "NOT," then it is evaluating the opposite truth value of that statement. The symbol for "NOT" is "¬". For example, if the statement p is "Elvis is dead," then ¬p would be "Elvis is not dead." The concept of "NOT" can cause some confusion when it relates to statements which contain the word "all." For example, if r is "All men have hair," then ¬r would be "All men do not have hair" or "No men have hair." Do not confuse this with "Not all men have hair" or "Some men have hair." The "NOT" should apply to the verb in the statement: in this case, "have." ¬p can also be written as NOT p or ~p. NOT p may also be referred to as the "negation of p."
Inverse: The inverse of a conditional says that the negation of the condition implies the negation of the conclusion. For example, the inverse of p ⇒ q is ¬p ⇒ ¬q. Like a converse, an inverse does not necessarily have the same truth value as the original conditional.
Biconditional: A biconditional is conditional where the condition and the conclusion imply one another. A biconditional starts with the words "if and only if." For example, "If and only if p, then q" means both that p implies q and that q implies p.
Premise: A premise is a statement whose truth value is known initially. For example, if one were to say "If today is Thursday, then the cafeteria will serve burritos," and one knew that what day it was, then the premise would be "Today is Thursday" or "Today is not Thursday."
⇒: The symbol which denotes a conditional. p ⇒ q is read as "if p, then q."
Iff: Iff is a shortened form of "if and only if." It is read as "if and only if."
⇔: The symbol which denotes a biconditonal. p ⇔ q is read as "If and only if p, then q."
∴: The symbol for "therefore." p ∴ q means that one knows that p is true (p is true is the premise), and has logically concluded that q must also be true.
∧: The symbol for "and."
∨: The symbol for "or."
There are a few forms of deductive logic. One of the most common deductive logical arguments is modus ponens, which states that:
- p ⇒ q
- p ∴ q
- (If p, then q)
- (p, therefore q)
An example of modus ponens:
- If I stub my toe, then I will be in pain.
- I stub my toe.
- Therefore, I am in pain.
Another form of deductive logic is modus tollens, which states the following.
- p ⇒ q
- ¬q ∴ ¬p
- (If p, then q)
- (not q, therefore not p)
Modus tollens is just as valid a form of logic as modus ponens. The following is an example which uses modus tollens.
- If today is Thursday, then the cafeteria will be serving burritos.
- The cafeteria is not serving burritos, therefore today is not Thursday.
Another form of deductive logic is known as the If-Then Transitive Property. Simply put, it means that there can be chains of logic where one thing implies another thing. The If-Then Transitive Property states:
- p ⇒ q
- (q ⇒ r) ∴ (p ⇒ r)
- (If p, then q)
- ((If q, then r), therefore (if p, then r))
For example, consider the following chain of if-then statements.
- If today is Thursday, then the cafeteria will be serving burritos.
- If the cafeteria will be serving burritos, then I will be happy.
- Therefore, if today is Thursday, then I will be happy.
Inductive reasoning is a logical argument which does not definitely prove a statement, but rather assumes it. Inductive reasoning is used often in life. Polling is an example of the use of inductive reasoning. If one were to poll one thousand people, and 300 of those people selected choice A, then one would infer that 30% of any population might also select choice A. This would be using inductive logic, because it does not definitively prove that 30% of any population would select choice A.
Because of this factor of uncertainty, inductive reasoning should be avoided when possible when attempting to prove geometric properties.
Truth tables are a way that one can display all the possibilities that a logical system may have when given certain premises. The following is a truth table with two premises (p and q), which shows the truth value of some basic logical statements. (NOTE: T = true; F = false)
|p||q||¬p||¬q||p ⇒ q||p ⇔ q||p ∧ q||p ∨ q|
|T||T||F||F||T||T||T||T|
|T||F||F||T||F||F||F||T|
|F||T||T||F||T||F||F||T|
|F||F||T||T||T||T||F||F|
Unlike science which has theories, mathematics has a definite notion of proof. Mathematics applies deductive reasoning to create a series of logical statements which show that one thing implies another.
Consider a triangle, which we define as a shape with three vertices joined by three lines. We know that we can arbitrarily pick some point on a page, and make that into a vertex. We repeat that process and pick a second point. Using a ruler, we can connect these two points. We now make a third point, and using the ruler connect it to each of the other points. We have constructed a triangle.
In mathematics we formalize this process into axioms, and carefully lay out the sequence of statements to show what follows. All definitions are clearly defined. In modern mathematics, we are always working within some system where various axioms hold.
The most common form of explicit proof in high school geometry is the two-column proof, which consists of five parts: the given, the proposition, the statement column, the reason column, and the diagram (if one is given).
Example of a Two-Column Proof
Now, suppose a problem tells you to solve for x, showing all steps made to get to the answer. A proof shows how this is done:
Given: x + 1 = 2
Prove: x = 1
|Statement||Reason|
|x + 1 = 2||Given|
|x + 1 - 1 = 2 - 1||Property of subtraction|
|x = 1||Definition of subtraction|
We use "Given" as the first reason, because it is "given" to us in the problem.
Written proofs (also known as informal proofs, paragraph proofs, or 'plans for proof') are written in paragraph form. Other than this formatting difference, they are similar to two-column proofs.
Sometimes it is helpful to start with a written proof, before formalizing the proof in two-column form. If you're having trouble putting your proof into two column form, try "talking it out" in a written proof first.
Example of a Written Proof
We are given that x + 1 = 2, so if we subtract one from each side of the equation (x + 1 - 1 = 2 - 1), then we can see that x = 1 by the definition of subtraction.
A flowchart proof or more simply a flow proof is a graphical representation of a two-column proof. Each set of statement and reasons are recorded in a box and then arrows are drawn from one step to another. This method shows how different ideas come together to formulate the proof.
Postulates in geometry are very similar to axioms, self-evident truths, and beliefs in logic, political philosophy and personal decision-making. The five postulates of Euclidean Geometry define the basic rules governing the creation and extension of geometric figures with ruler and compass. Together with the five axioms (or "common notions") and twenty-three definitions at the beginning of Euclid's Elements, they form the basis for the extensive proofs given in this masterful compilation of ancient Greek geometric knowledge. They are as follows:
- A straight line may be drawn from any given point to any other.
- A straight line may be extended to any finite length.
- A circle may be described with any given point as its center and any distance as its radius.
- All right angles are congruent.
- If a straight line intersects two other straight lines, and so makes the two interior angles on one side of it together less than two right angles, then the other straight lines will meet at a point if extended far enough on the side on which the angles are less than two right angles.
Postulate 5, the so-called Parallel Postulate was the source of much annoyance, probably even to Euclid, for being so relatively prolix. Mathematicians have a peculiar sense of aesthetics that values simplicity arising from simplicity, with the long complicated proofs, equations and calculations needed for rigorous certainty done behind the scenes, and to have such a long sentence amidst such other straightforward, intuitive statements seems awkward. As a result, many mathematicians over the centuries have tried to prove the results of the Elements without using the Parallel Postulate, but to no avail. However, in the past two centuries, assorted non-Euclidean geometries have been derived based on using the first four Euclidean postulates together with various negations of the fifth.
Chapter 7. Vertical Angles
Vertical angles are a pair of angles with a common vertex whose sides form opposite rays. An extensively useful fact about vertical angles is that they are congruent. Aside from saying that any pair of vertical angles "obviously" have the same measure by inspection, we can prove this fact with some simple algebra and an observation about supplementary angles. Let two lines intersect at a point, and angles A1 and A2 be a pair of vertical angles thus formed. At the point of intersection, two other angles are also formed, and we'll call either one of them B1 without loss of generality. Since B1 and A1 are supplementary, we can say that the measure of B1 plus the measure of A1 is 180. Similarly, the measure of B1 plus the measure of A2 is 180. Thus the measure of A1 plus the measure of B1 equals the measure of A2 plus the measure of B1, by substitution. Then by subtracting the measure of B1 from each side of this equality, we have that the measure of A1 equals the measure of A2.
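The same argument can be written compactly in LaTeX (angle names as in the paragraph above):
\begin{align*}
m\angle A_1 + m\angle B_1 &= 180^\circ && \text{(linear pair)} \\
m\angle A_2 + m\angle B_1 &= 180^\circ && \text{(linear pair)} \\
\therefore\; m\angle A_1 &= m\angle A_2
\end{align*}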
Parallel Lines in a Plane
Two coplanar lines are said to be parallel if they never intersect. For any given point on the first line, its distance to the second line is equal to the distance between any other point on the first line and the second line. The common notation for parallel lines is "||" (a double pipe); it is not unusual to see "//" as well. If line m is parallel to line n, we write "m || n". Lines in a plane either coincide, intersect in a point, or are parallel. Controversies surrounding the Parallel Postulate lead to the development of non-Euclidean geometries.
Parallel Lines and Special Pairs of Angles
When two (or more) parallel lines are cut by a transversal, the following angle relationships hold:
- corresponding angles are congruent
- alternate exterior angles are congruent
- same-side interior angles are supplementary
Theorems Involving Parallel Lines
- If a line in a plane is perpendicular to one of two parallel lines, it is perpendicular to the other line as well.
- If a line in a plane is parallel to one of two parallel lines, it is parallel to both parallel lines.
- If three or more parallel lines are intersected by two or more transversals, then they divide the transversals proportionally.
Congruent shapes are the same size with corresponding lengths and angles equal. In other words, they are exactly the same size and shape. They will fit on top of each other perfectly. Therefore if you know the size and shape of one you know the size and shape of the others. For example:
Each of the above shapes is congruent to each other. The only difference is in their orientation, or the way they are rotated. If you traced them onto paper and cut them out, you could see that they fit over each other exactly.
Comparing two triangles whose corresponding angles are equal but whose corresponding sides differ in length, right away we can see that, though the angles correspond in size and position, the sides do not. Therefore it is proved that the triangles are not congruent.
Similar shapes are like congruent shapes in that they must be the same shape, but they don't have to be the same size. Their corresponding angles are congruent and their corresponding sides are in proportion.
Methods of Determining Congruence
Two triangles are congruent if:
- each pair of corresponding sides is congruent
- two pairs of corresponding angles are congruent and a pair of corresponding sides are congruent
- two pairs of corresponding sides and the angles included between them are congruent
Tips for Proofs
Commonly used prerequisite knowledge in determining the congruence of two triangles includes:
- by the reflexive property, a segment is congruent to itself
- vertical angles are congruent
- when parallel lines are cut by a transversal corresponding angles are congruent
- when parallel lines are cut by a transversal alternate interior angles are congruent
- midpoints and bisectors divide segments and angles into two congruent parts
For two triangles to be similar, all 3 corresponding angles must be congruent, and all three sides must be proportionally equal. Two triangles are similar if...
- Two angles of each triangle are congruent.
- The acute angle of a right triangle is congruent to the acute angle of another right triangle.
- The two triangles are congruent. Note here that congruency implies similarity.
A quadrilateral is a polygon that has four sides.
Special Types of Quadrilaterals
- A parallelogram is a quadrilateral having two pairs of parallel sides.
- A square, a rhombus, and a rectangle are all examples of parallelograms.
- A rhombus is a quadrilateral of which all four sides are the same length.
- A rectangle is a parallelogram of which all four angles are 90 degrees.
- A square is a quadrilateral of which all four sides are of the same length, and all four angles are 90 degrees.
- A square is a rectangle, a rhombus, and a parallelogram.
- A trapezoid is a quadrilateral which has two parallel sides (U.S.)
- U.S. usage: A trapezium is a quadrilateral which has no parallel sides.
- U.K usage: A trapezium is a quadrilateral with two parallel sides (same as US trapezoid definition).
- A kite is a quadrilateral with two pairs of congruent adjacent sides.
One of the most important properties used in proofs is that the sum of the angles of the quadrilateral is always 360 degrees. This can easily be proven too:
If you draw a random quadrilateral, and one of its diagonals, you'll split it up into two triangles. Given that the sum of the angles of a triangle is 180 degrees, you can sum them up, and it'll give 360 degrees.
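Written symbolically for a quadrilateral ABCD split along the diagonal AC (the vertex labels here are chosen purely for illustration):
(\angle BAC + \angle ABC + \angle BCA) + (\angle CAD + \angle ACD + \angle CDA) = 180^\circ + 180^\circ = 360^\circ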
A parallelogram is a geometric figure with two pairs of parallel sides. Parallelograms are a special type of quadrilateral. The opposite sides are equal in length and the opposite angles are also equal. The area is equal to the product of any side and the distance between that side and the line containing the opposite side.
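Written as a formula, with b a side taken as the base and h the distance to the line containing the opposite side (the numbers below are illustrative only):
A = bh, \qquad \text{e.g. } b = 5,\; h = 3 \;\Rightarrow\; A = 5 \cdot 3 = 15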
Properties of Parallelograms
The following properties are common to all parallelograms (parallelogram, rhombus, rectangle, square)
- both pairs of opposite sides are parallel
- both pairs of opposite sides are congruent
- both pairs of opposite angles are congruent
- the diagonals bisect each other
- A rhombus is a parallelogram with four congruent sides.
- The diagonals of a rhombus are perpendicular.
- Each diagonal of a rhombus bisects two angles of the rhombus.
- A rhombus may or may not be a square.
- A square is a parallelogram with four right angles and four congruent sides.
- A square is both a rectangle and a rhombus and inherits all of their properties.
A Trapezoid (American English) or Trapezium (British English) is a quadrilateral that has two parallel sides and two non parallel sides.
Some properties of trapezoids:
- The interior angles sum to 360° as in any quadrilateral.
- The parallel sides are unequal.
- Each of the parallel sides is called a base (b) of the trapezoid. The two angles that join one base are called 'base angles'.
- If the two non-parallel sides are equal, the trapezoid is called an isosceles trapezoid.
- In an isosceles trapezoid, each pair of base angles are equal.
- If one pair of base angles of a trapezoid are equal, the trapezoid is isosceles.
- A line segment connecting the midpoints of the non-parallel sides is called the median (m) of the trapezoid.
- The median of a trapezoid is equal to one half the sum of the bases (called b1 and b2): m = (b1 + b2)/2.
- A line segment perpendicular to the bases is called an altitude (h) of the trapezoid.
The area (A) of a trapezoid is equal to the product of an altitude and the median: A = m × h.
Recall, though, that the median is half of the sum of the bases: m = (b1 + b2)/2.
Substituting for m, we get: A = h(b1 + b2)/2.
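As a quick numerical check of these formulas, here is a minimal Python sketch (the function names are ours, not from any standard library):

```python
def trapezoid_median(b1, b2):
    """Median (midsegment) length: half the sum of the two parallel bases."""
    return (b1 + b2) / 2

def trapezoid_area(b1, b2, h):
    """Area = median * altitude, i.e. h * (b1 + b2) / 2."""
    return trapezoid_median(b1, b2) * h

# Example: bases of length 3 and 5 with altitude 4 -> median 4, area 16
print(trapezoid_median(3, 5))   # 4.0
print(trapezoid_area(3, 5, 4))  # 16.0
```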
A circle is the set of all points in a plane that are equidistant from a single point; that single point is called the centre of the circle, and the distance between any point on the circle and the centre is called the radius of the circle.
A chord is an internal segment of a circle that has both of its endpoints on the circumference of the circle.
- the diameter of a circle is the largest chord possible
A secant of a circle is any line that intersects a circle in two places.
- a secant contains a chord of the circle
A tangent to a circle is a line that intersects a circle in exactly one point, called the point of tangency.
- at the point of tangency the tangent line and the radius of the circle are perpendicular
Chapter 16. Circles/Arcs
An arc is a segment of the perimeter of a given circle. The measure of an arc is expressed as an angle, which can be given in radians or degrees (more on radians later). The exact measure of the arc is determined by the measure of the angle formed when a line is drawn from the center of the circle to each end point. As an example, the circle below has an arc cut out of it with a measure of 30 degrees.
As mentioned before, an arc can be measured in degrees or radians. A radian is merely a different unit for measuring an angle. If we take a unit circle (which has a radius of 1 unit), take an arc with a length equal to 1 unit, and draw a line from each endpoint of the arc to the center of the circle, the angle formed is equal to 1 radian. This concept is displayed below: in this circle an arc has been cut off by an angle of 1 radian, and therefore the length of the arc is equal to 1, because the radius is 1.
From this definition we can say that a full turn around the unit circle measures 2π radians, because the perimeter of a unit circle is equal to 2π. Another useful property of this definition, which will be extremely useful to anyone who studies arcs, is that the length of an arc is equal to its measure in radians multiplied by the radius of the circle.
Converting to and from radians is a fairly simple process. Two facts are required to do so: first, a full circle is equal to 360 degrees, and it is also equal to 2π radians. Using these two facts we can form the following relation:
360° = 2π radians, thus 1 degree is equal to π/180 radians.
From here we can simply multiply by the number of degrees to convert to radians. For example, if we have 20 degrees and want to convert to radians, then we proceed as follows: 20° × π/180 = π/9 radians.
The same sort of argument can be used to find the value of 1 radian.
2π radians = 360°, thus 1 radian is equal to 180/π degrees (approximately 57.3°).
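These conversions are easy to verify numerically. Below is a small Python sketch; Python's standard math module already provides math.radians and math.degrees, used here only to confirm the hand-derived factors:

```python
import math

def degrees_to_radians(deg):
    """Multiply by pi/180, since 360 degrees = 2*pi radians."""
    return deg * math.pi / 180

def radians_to_degrees(rad):
    """Multiply by 180/pi."""
    return rad * 180 / math.pi

print(degrees_to_radians(20))   # 0.349065... (= pi/9)
print(math.radians(20))         # same value, via the standard library
print(radians_to_degrees(1))    # 57.29577... degrees in one radian
```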
A tangent is a line in the same plane as a given circle that meets that circle in exactly one point. That point is called the point of tangency. A tangent cannot pass through the interior of a circle; a line that does so intersects the circle twice and is a secant, and the segment of it inside the circle is a chord. A secant is a line containing a chord.
A common tangent is a line tangent to two circles in the same plane. If the tangent does not intersect the line containing and connecting the centers of the circles, it is an external tangent. If it does, it is an internal tangent.
Two circles in the same plane are tangent to one another if they touch the same tangent line at the same point.
Sector of a circle
A sector of a circle can be thought of as a pie piece. In the picture below, a sector of the circle is shaded yellow.
To find the area of a sector, find the area of the whole circle and then multiply by the angle of the sector over 360 degrees: A = (θ/360) × πr².
A more intuitive approach can be used when the sector is half the circle. In this case the area of the sector would just be the area of the circle divided by 2.
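A minimal sketch of the sector-area rule, assuming the sector's angle is given in degrees:

```python
import math

def sector_area(radius, angle_deg):
    """Area of a sector = (angle / 360) * area of the whole circle."""
    return (angle_deg / 360) * math.pi * radius ** 2

print(sector_area(2, 180))  # half of a circle of radius 2 -> 6.283...
print(sector_area(2, 360))  # the whole circle -> 12.566...
```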
- See Angle
Addition Property of Equality
For any real numbers a, b, and c, if a = b, then a + c = b + c.
A figure is an angle if and only if it is composed of two rays which share a common endpoint. Each of these rays (or segments, as the case may be) is known as a side of the angle (for example, in the illustration at right), and the common point is known as the angle's vertex (point B in the illustration). An angle is measured by the amount of rotation that separates its two sides. The units for angle measure are radians and degrees. Angles may be classified by their degree measure.
- Acute Angle: an angle is an acute angle if and only if it has a measure of less than 90°
- Right Angle: an angle is a right angle if and only if it has a measure of exactly 90°
- Obtuse Angle: an angle is an obtuse angle if and only if it has a measure of greater than 90° but less than 180°
Angle Addition Postulate
If P is in the interior of ∠ABC, then m∠ABP + m∠PBC = m∠ABC.
Center of a circle
Point P is the center of circle C if and only if all points in circle C are equidistant from point P and point P is contained in the same plane as circle C.
A collection of points is said to be a circle with a center at point P and a radius of some distance r if and only if it is the collection of all points which are a distance of r away from point P and are contained by a plane which contains point P.
A polygon is said to be concave if and only if it contains at least one interior angle with a measure strictly greater than 180° and strictly less than 360°.
Two angles formed by a transversal intersecting with two lines are corresponding angles if and only if one is on the inside of the two lines, the other is on the outside of the two lines, and both are on the same side of the transversal.
Corresponding Angles Postulate
If two lines cut by a transversal are parallel, then their corresponding angles are congruent.
Corresponding Parts of Congruent Triangles are Congruent Postulate
The Corresponding Parts of Congruent Triangles are Congruent Postulate (CPCTC) states:
- If ∆ABC ≅ ∆XYZ, then all parts of ∆ABC are congruent to their corresponding parts in ∆XYZ. For example:
- ∠ABC ≅ ∠XYZ
- ∠BCA ≅ ∠YZX
- ∠CAB ≅ ∠ZXY
CPCTC also applies to all other corresponding parts of the triangles, such as their altitudes, medians, and circumcenters.
A line segment is the diameter of a circle if and only if it is a chord of the circle which contains the circle's center.
- See Circle
A collection of points is a line if and only if the collection of points is perfectly straight (aligned), is infinitely long, and is infinitely thin. Between any two points on a line, there exists an infinite number of points which are also contained by the line. Lines are usually named by two points on the line, such as line AB.
A collection of points is a line segment if and only if it is perfectly straight, is infinitely thin, and has a finite length. A line segment is measured by the shortest distance between the two extreme points on the line segment, known as endpoints. Between any two points on a line segment, there exists an infinite number of points which are also contained by the line segment.
Two lines or line segments are said to be parallel if and only if the lines are contained by the same plane and have no points in common if continued infinitely.
Two planes are said to be parallel if and only if the planes have no points in common when continued infinitely.
Two lines that intersect at a 90° angle.
Given a line ℓ and a point P not on ℓ, there is one and only one line that goes through point P perpendicular to ℓ.
An object is a plane if and only if it is a two-dimensional object which has no thickness or curvature and continues infinitely. A plane can be defined by three non-collinear points. A plane may be considered to be analogous to a piece of paper.
A point is a zero-dimensional mathematical object representing a location in one or more dimensions. A point has no size; it has only location.
A polygon is a closed plane figure composed of at least 3 straight line segments. Each side intersects the next side at a shared endpoint, and no two intersecting sides are collinear.
The radius of a circle is the distance between any given point on the circle and the circle's center.
- See Circle
A ray is a straight collection of points which continues infinitely in one direction. The point at which the ray stops is known as the ray's endpoint. Between any two points on a ray, there exists an infinite number of points which are also contained by the ray.
The points on a line can be matched one to one with the real numbers. The real number that corresponds to a point is the point's coordinate. The distance between two points is the absolute value of the difference between the two coordinates of the two points.
Synthetic versus analytic geometry
- Two and Three-Dimensional Geometry and Other Geometric Figures
Perimeter and Arclength
Perimeter of Circle
The circle's perimeter (circumference) can be calculated using the following formula:
C = 2πr, where π ≈ 3.14159 and r is the radius of the circle.
Perimeter of Polygons
The perimeter of a polygon with n sides can be calculated using the following formula:
P = s1 + s2 + ... + sn, i.e., the sum of the lengths of all n sides.
Arclength of Circles
The arc length of an arc of a circle with radius r can be calculated using
s = rθ,
where θ is the angle subtended by the arc, given in radians.
Arclength of Curves
If a curve in the plane has a parametric form (x(t), y(t)) for a ≤ t ≤ b, then the arc length can be calculated using the following formula:
L = ∫ from a to b of √(x′(t)² + y′(t)²) dt.
This formula can be derived by approximating the curve with infinitely small right triangles whose legs are the changes in x and y.
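The arc-length integral can be approximated numerically by summing the lengths of many short straight segments, which is essentially the "infinitely small triangles" idea just mentioned. A rough Python sketch (the sample count n is an arbitrary choice):

```python
import math

def arc_length(x, y, a, b, n=100000):
    """Approximate the length of the curve (x(t), y(t)) for a <= t <= b
    by summing the lengths of n straight segments."""
    total = 0.0
    t_prev = a
    for i in range(1, n + 1):
        t = a + (b - a) * i / n
        total += math.hypot(x(t) - x(t_prev), y(t) - y(t_prev))
        t_prev = t
    return total

# Example: a circle of radius 3 traced once; the result is close to 2*pi*3
print(arc_length(lambda t: 3 * math.cos(t), lambda t: 3 * math.sin(t),
                 0, 2 * math.pi))   # approximately 18.8495
```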
Area of Circles
The area of a circle is found using the formula:
A = πr²
where π is a constant roughly equal to 3.14159265358979 and r is the radius of the circle: the length of a line drawn from any point on the circle to its center.
Area of Triangles
Three ways of calculating the area inside of a triangle are mentioned here.
If one of the sides of the triangle is chosen as a base, then a height for the triangle relative to that base can be defined. The height is a line segment perpendicular to the base (or to the line formed by extending the base) whose endpoints are the corner point not on the base and a point on the base or on the line extending the base. Let B = the length of the side chosen as the base, and let
h = the distance between the endpoints of the height segment, which is perpendicular to the base. Then the area of the triangle is given by:
A = B × h / 2
This method of calculating the area is good if the value of a base and its corresponding height in the triangle is easily determined. This is particularly true if the triangle is a right triangle, and the lengths of the two sides sharing the 90° angle can be determined.
- A = √(s(s − a)(s − b)(s − c)), also known as Heron's Formula
If the lengths of all three sides of a triangle are known, Heron's formula may be used to calculate the area of the triangle. First, the semiperimeter, s, must be calculated by dividing the sum of the lengths of all three sides by 2. For a triangle having side lengths a, b, and c:
s = (a + b + c) / 2
Then the triangle's area is given by:
A = √(s(s − a)(s − b)(s − c))
If the triangle is needle shaped, that is, one of the sides is very much shorter than the other two, then it can be difficult to compute the area because the precision needed is greater than that available in the calculator or computer that is used. In other words, Heron's formula is numerically unstable. Another formula that is much more stable is:
A = ¼ √( (a + (b + c)) (c − (a − b)) (c + (a − b)) (a + (b − c)) )
where a, b, and c have been sorted so that a ≥ b ≥ c, and the parentheses are evaluated exactly as written.
In a triangle with sides of length a, b, and c and angles A, B, and C opposite them,
Area = ½ a b sin C = ½ b c sin A = ½ c a sin B
This formula is true because, taking side a as the base, the height of the triangle is b sin C, so the usual formula ½ × base × height applies. It is useful because you don't need to find the height in a separate step, and it is also used to prove the law of sines (divide all terms in the above equation by abc and you'll get it directly!).
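The area formulas above are easy to compare in code. The sketch below is illustrative only; the stable variant follows the sorted-sides formula quoted above, and the exact grouping of the parentheses is what gives it its numerical robustness:

```python
import math

def area_base_height(b, h):
    return b * h / 2

def area_heron(a, b, c):
    s = (a + b + c) / 2
    return math.sqrt(s * (s - a) * (s - b) * (s - c))

def area_stable(a, b, c):
    """Numerically stable variant; requires a >= b >= c, so sort first."""
    a, b, c = sorted((a, b, c), reverse=True)
    return 0.25 * math.sqrt((a + (b + c)) * (c - (a - b))
                            * (c + (a - b)) * (a + (b - c)))

def area_two_sides_angle(a, b, C):
    """Half the product of two sides and the sine of the included angle C (in radians)."""
    return 0.5 * a * b * math.sin(C)

# A 3-4-5 right triangle has area 6 by every method:
print(area_base_height(3, 4))                    # 6.0
print(area_heron(3, 4, 5))                       # 6.0
print(area_stable(3, 4, 5))                      # 6.0
print(area_two_sides_angle(3, 4, math.pi / 2))   # 6.0
```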
Area of Rectangles
The area calculation of a rectangle is simple and easy to understand. One of the sides is chosen as the base, with a length b. An adjacent side is then the height, with a length h, because in a rectangle the adjacent sides are perpendicular to the side chosen as the base. The rectangle's area is given by:
A = b × h
Sometimes, the base length may be referred to as the length of the rectangle, l, and the height as the width of the rectangle, w. Then the area formula becomes:
A = l × w
Regardless of the labels used for the sides, it is apparent that the two formulas are equivalent.
Of course, the area of a square with sides having length s would be:
A = s²
Area of Parallelograms
The area of a parallelogram can be determined using the equation for the area of a rectangle. The formula is:
A = b × h
A is the area of the parallelogram, b is the base, and h is the height.
The height is a perpendicular line segment that connects one of the vertices to its opposite side (the base).
Area of Rhombus
Remember that in a rhombus all sides are equal in length. The area is
A = (d1 × d2) / 2
where d1 and d2 represent the diagonals.
Area of Trapezoids
The area of a trapezoid is derived from taking the arithmetic mean of its two parallel sides to form a rectangle of equal area:
A = h × (b1 + b2) / 2
where b1 and b2 are the lengths of the two parallel bases and h is the height.
Area of Kites
The area of a kite is based on splitting the kite into four pieces by halving it along each diagonal and using these pieces to form a rectangle of equal area:
A = (a × b) / 2
where a and b are the diagonals of the kite.
Alternatively, the kite may be divided into two halves, each of which is a triangle, by the longer of its diagonals, a. The area of each triangle is thus
½ × a × (b/2) = ab/4
where b is the other (shorter) diagonal of the kite, which is bisected by a. The total area of the kite (which is composed of two identical such triangles) is
2 × ab/4
which is the same as
(a × b) / 2
Areas of other Quadrilaterals
The areas of other quadrilaterals are slightly more complex to calculate, but can still be found if the quadrilateral is well-defined. For example, a quadrilateral can be divided into two triangles, or some combination of triangles and rectangles. The areas of the constituent polygons can be found and added up with arithmetic.
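One hedged way to carry out the "divide into two triangles" idea in code is sketched below for a quadrilateral given by its four corner points listed in order. It uses the coordinate (cross-product) formula for a triangle's area and assumes the diagonal from the first to the third vertex lies inside the figure, which is always true for a convex quadrilateral:

```python
def triangle_area(p, q, r):
    """Area of the triangle with vertices p, q, r, each an (x, y) pair."""
    (x1, y1), (x2, y2), (x3, y3) = p, q, r
    return abs((x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1)) / 2

def quadrilateral_area(a, b, c, d):
    """Split the quadrilateral a-b-c-d along the diagonal a-c and add the pieces."""
    return triangle_area(a, b, c) + triangle_area(a, c, d)

# A unit square has area 1:
print(quadrilateral_area((0, 0), (1, 0), (1, 1), (0, 1)))  # 1.0
```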
Volume is like area expanded out into 3 dimensions. Area deals with only 2 dimensions. For volume we have to consider another dimension. Area can be thought of as how much space some drawing takes up on a flat piece of paper. Volume can be thought of as how much space an object takes up.
|Common equations for volume:|
|A cube:|V = s³|s = length of a side|
|A rectangular prism:|V = lwh|l = length, w = width, h = height|
|A cylinder (circular prism):|V = πr²h|r = radius of circular face, h = height|
|Any prism that has a constant cross sectional area along the height:|V = Ah|A = area of the base, h = height|
|A sphere:|V = (4/3)πr³|r = radius of sphere|
|which is the integral of the surface area of a sphere|
|An ellipsoid:|V = (4/3)πabc|a, b, c = semi-axes of ellipsoid|
|A pyramid:|V = (1/3)Ah|A = area of the base, h = height of pyramid|
|A cone (circular-based pyramid):|V = (1/3)πr²h|r = radius of circle at base, h = distance from base to tip|
(The units of volume depend on the units of length - if the lengths are in meters, the volume will be in cubic meters, etc.)
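Translated into code, the table above might look like the following minimal sketch (the function and argument names are ours; they simply mirror the table):

```python
import math

def cube_volume(s):                     return s ** 3
def rectangular_prism_volume(l, w, h):  return l * w * h
def cylinder_volume(r, h):              return math.pi * r ** 2 * h
def prism_volume(base_area, h):         return base_area * h
def sphere_volume(r):                   return 4 / 3 * math.pi * r ** 3
def ellipsoid_volume(a, b, c):          return 4 / 3 * math.pi * a * b * c
def pyramid_volume(base_area, h):       return base_area * h / 3
def cone_volume(r, h):                  return math.pi * r ** 2 * h / 3

print(cube_volume(2))      # 8
print(sphere_volume(1))    # 4.18879... (4/3 of pi)
print(cone_volume(3, 4))   # 37.699..., one third of the matching cylinder
```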
The volume of any solid whose cross sectional areas are all the same is equal to that cross sectional area times the distance the centroid (the center of gravity in a physical object) would travel through the solid.
If two solids are contained between two parallel planes and every plane parallel to these two planes cuts equal cross sections through the two solids, then their volumes are equal.
A polygon is a two-dimensional figure, meaning all of the lines in the figure are contained within one plane. Polygons are classified by the number of angles, which is also the number of sides.
One key point to note is that a polygon must have at least three sides. Normally, three- to ten-sided figures are referred to by their names (below), while a figure with eleven or more sides is called an n-gon, where n is the number of sides. Hence a forty-sided polygon is called a 40-gon.
- Triangle: a polygon with three angles and sides.
- Quadrilateral: a polygon with four angles and sides.
- Pentagon: a polygon with five angles and sides.
- Hexagon: a polygon with six angles and sides.
- Heptagon: a polygon with seven angles and sides.
- Octagon: a polygon with eight angles and sides.
- Nonagon: a polygon with nine angles and sides.
- Decagon: a polygon with ten angles and sides.
For a list of n-gon names, go to and scroll to the bottom of the page.
Polygons are also classified as convex or concave. A convex polygon has interior angles less than 180 degrees, thus all triangles are convex. If a polygon has at least one internal angle greater than 180 degrees, then it is concave. An easy way to tell if a polygon is concave is if one side can be extended and crosses the interior of the polygon. Concave polygons can be divided into several convex polygons by drawing diagonals. Regular polygons are polygons in which all sides and angles are congruent.
A triangle is a type of polygon having three sides and, therefore, three angles. The triangle is a closed figure formed from three straight line segments joined at their ends. The points at the ends can be called the corners, angles, or vertices of the triangle. Since any given triangle lies completely within a plane, triangles are often treated as two-dimensional geometric figures. As such, a triangle has no volume and, because it is a two-dimensionally closed figure, the flat part of the plane inside the triangle has an area, typically referred to as the area of the triangle. Triangles are always convex polygons.
A triangle must have at least some area, so all three corner points of a triangle cannot lie in the same line. The sum of the lengths of any two sides of a triangle is always greater than the length of the third side. The preceding statement is sometimes called the Triangle Inequality.
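The Triangle Inequality gives a quick test for whether three given lengths can form a triangle at all; a small illustrative sketch:

```python
def can_form_triangle(a, b, c):
    """True if each side is shorter than the sum of the other two."""
    return a + b > c and b + c > a and a + c > b

print(can_form_triangle(3, 4, 5))   # True
print(can_form_triangle(1, 2, 10))  # False: 1 + 2 is not greater than 10
```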
Certain types of triangles
Categorized by angle
The sum of the interior angles in a triangle always equals 180°. This means that no more than one of the angles can be 90° or more. All three angles can be less than 90° in the triangle; then it is called an acute triangle. One of the angles can be 90° and the other two less than 90°; then the triangle is called a right triangle. Finally, one of the angles can be more than 90° and the other two less; then the triangle is called an obtuse triangle.
Categorized by sides
If all three of the sides of a triangle are of different length, then the triangle is called a scalene triangle.
If two of the sides of a triangle are of equal length, then it is called an isosceles triangle. In an isosceles triangle, the angle between the two equal sides can be more than, equal to, or less than 90°. The other two angles are both less than 90°.
If all three sides of a triangle are of equal length, then it is called an equilateral triangle and all three of the interior angles must be 60°, making it equiangular. Because the interior angles are all equal, all equilateral triangles are also the three-sided variety of a regular polygon and they are all similar, but might not be congruent. However, polygons having four or more equal sides might not have equal interior angles, might not be regular polygons, and might not be similar or congruent. Of course, pairs of triangles which are not equilateral might be similar or congruent.
Opposite corners and sides in triangles
If one of the sides of a triangle is chosen, the interior angles of the corners at the side's endpoints can be called adjacent angles. The corner which is not one of these endpoints can be called the corner opposite to the side. The interior angle whose vertex is the opposite corner can be called the angle opposite to the side.
Likewise, if a corner or its angle is chosen, then the two sides sharing an endpoint at that corner can be called adjacent sides. The side not having this corner as one of its two endpoints can be called the side opposite to the corner.
The sides of a triangle, or their lengths, are typically labeled with lower case letters. The corners, or their corresponding angles, can be labeled with capital letters. The triangle as a whole can be labeled by a small triangle symbol and its corner points. In a triangle, the largest interior angle is opposite the longest side, and vice versa.
Any triangle can be divided into two right triangles by taking the longest side as a base, and extending a line segment from the opposite corner to a point on the base such that it is perpendicular to the base. Such a line segment would be considered the height or altitude (h) for that particular base (b). The two right triangles resulting from this division would both share the height as one of their sides. The interior angles at the meeting of the height and base would be 90° for each new right triangle. For acute triangles, any of the three sides can act as the base and have a corresponding height. For more information on right triangles, see Right Triangles and Pythagorean Theorem.
Area of Triangles
If the base b and height h of a triangle are known, then the area of the triangle can be calculated by the formula:
A = b × h / 2
(A is the symbol for area)
Ways of calculating the area inside of a triangle are further discussed under Area.
The centroid is constructed by drawing all the medians of the triangle. All three medians intersect at the same point: this crossing point is the centroid. Centroids are always inside a triangle. They are also the centre of gravity of the triangle.
The three angle bisectors of the triangle intersect at a single point, called the incentre. Incentres are always inside the triangle. The three sides are equidistant from the incentre. The incentre is also the centre of the inscribed circle (incircle) of a triangle, or the interior circle which touches all three sides of the triangle.
The circumcentre is the intersection of all three perpendicular bisectors. Unlike the incentre, it is outside the triangle if the triangle is obtuse. Acute triangles always have circumcentres inside, while the circumcentre of a right triangle is the midpoint of the hypotenuse. The vertices of the triangle are equidistant from the circumcentre. The circumcentre is so called because it is the centre of the circumcircle, or the exterior circle which touches all three vertices of the triangle.
The orthocentre is the crossing point of the three altitudes. It is always inside acute triangles, outside obtuse triangles, and at the vertex of the right angle in a right triangle.
Please note that in an equilateral triangle these centres all coincide at the same point.
Right Triangles and Pythagorean Theorem
Right triangles are triangles in which one of the interior angles is 90°. A 90° angle is called a right angle. Right triangles are sometimes called right-angled triangles. The other two interior angles are complementary, i.e. their sum equals 90°. Right triangles have special properties which make it easier to conceptualize and calculate their parameters in many cases.
The side opposite of the right angle is called the hypotenuse. The sides adjacent to the right angle are the legs. When using the Pythagorean Theorem, the hypotenuse or its length is often labeled with a lower case c. The legs (or their lengths) are often labeled a and b.
Either of the legs can be considered a base and the other leg would be considered the height (or altitude), because the right angle automatically makes them perpendicular. If the lengths of both the legs are known, then by setting one of these sides as the base (b) and the other as the height (h), the area of the right triangle is very easy to calculate using this formula:
A = b × h / 2
This is intuitively logical because another congruent right triangle can be placed against it so that the hypotenuses are the same line segment, forming a rectangle with sides having length b and width h. The area of the rectangle is b × h, so either one of the congruent right triangles forming it has an area equal to half of that rectangle.
Right triangles can be neither equilateral, acute, nor obtuse triangles. Isosceles right triangles have two 45° angles as well as the 90° angle. All isosceles right triangles are similar since corresponding angles in isosceles right triangles are equal. If another triangle can be divided into two right triangles (see Triangle), then the area of the triangle may be determined from the sum of the areas of the two constituent right triangles. The Pythagorean theorem also generalizes to non-right triangles as the law of cosines: c² = a² + b² − 2ab cos C, where C is the angle opposite side c.
For history regarding the Pythagorean Theorem, see Pythagorean theorem. The Pythagorean Theorem states that:
- In a right triangle, the square of the length of the hypotenuse is equal to the sum of the squares of the lengths of the other two sides.
Let's take a right triangle as shown here and set c equal to the length of the hypotenuse and set a and b each equal to the lengths of the other two sides. Then the Pythagorean Theorem can be stated as this equation:
a² + b² = c²
Using the Pythagorean Theorem, if the lengths of any two of the sides of a right triangle are known and it is known which side is the hypotenuse, then the length of the third side can be determined from the formula.
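In code, the theorem lets us recover any one side of a right triangle from the other two. A minimal sketch (math.hypot is the standard-library shortcut for the hypotenuse):

```python
import math

def hypotenuse(a, b):
    """c = sqrt(a^2 + b^2)."""
    return math.hypot(a, b)

def missing_leg(c, a):
    """b = sqrt(c^2 - a^2); c must be the hypotenuse, so c > a."""
    return math.sqrt(c ** 2 - a ** 2)

print(hypotenuse(3, 4))    # 5.0
print(missing_leg(13, 5))  # 12.0
```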
Sine, Cosine, and Tangent for Right Triangles
Sine, Cosine, and Tangent are all functions of an angle, which are useful in right triangle calculations. For an angle designated as θ, the sine function is abbreviated as sin θ, the cosine function is abbreviated as cos θ, and the tangent function is abbreviated as tan θ. For any
angle θ, sin θ, cos θ, and tan θ are each single determined values and if θ is a known value, sin θ, cos θ, and tan θ can be looked up in a table or found with a calculator. There is a table listing these function values at the end of this section. For an angle between listed values, the sine, cosine, or tangent of that angle can be estimated from the values in the table. Conversely, if a number is known to be the sine, cosine, or tangent of an angle, then such tables can be used in reverse to find (or estimate) the value of a corresponding angle.
These three functions are related to right triangles in the following ways:
In a right triangle,
- the sine of a non-right angle equals the length of the leg opposite that angle divided by the length of the hypotenuse.
- the cosine of a non-right angle equals the length of the leg adjacent to it divided by the length of the hypotenuse.
- the tangent of a non-right angle equals the length of the leg opposite that angle divided by the length of the leg adjacent to it.
For any value of θ where cos θ ≠ 0, tan θ = sin θ / cos θ.
If one considers the diagram representing a right triangle with the two non-right angles θ1 and θ2, and the side lengths a, b, c as shown here (taking a as the leg opposite θ1, b as the leg opposite θ2, and c as the hypotenuse):
For the functions of angle θ1: sin θ1 = a/c, cos θ1 = b/c, tan θ1 = a/b.
Analogously, for the functions of angle θ2: sin θ2 = b/c, cos θ2 = a/c, tan θ2 = b/a.
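A short sketch of these ratios, using the same labeling (a opposite the chosen angle, b adjacent to it, c the hypotenuse); the last lines check the side ratios against Python's math functions for a 3-4-5 triangle:

```python
import math

def sin_from_sides(opposite, hypotenuse):
    return opposite / hypotenuse

def cos_from_sides(adjacent, hypotenuse):
    return adjacent / hypotenuse

def tan_from_sides(opposite, adjacent):
    return opposite / adjacent

a, b, c = 3.0, 4.0, 5.0            # legs a and b, hypotenuse c
theta1 = math.atan2(a, b)          # the angle opposite side a
print(sin_from_sides(a, c), math.sin(theta1))   # 0.6   0.6
print(cos_from_sides(b, c), math.cos(theta1))   # 0.8   0.8
print(tan_from_sides(a, b), math.tan(theta1))   # 0.75  0.75
```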
Table of sine, cosine, and tangent for angles θ from 0 to 90°
|θ in degrees||θ in radians||sin θ||cos θ||tan θ|
General rules for important angles:
Polyominoes are shapes made from connecting unit squares together, though certain connections are not allowed.
A domino is the shape made from attaching unit squares so that they share one full edge. The term polyomino is based on the word domino. There is only one possible domino.
Tromino
A polyomino made from three squares is called a tromino. There are only two possible trominoes: a straight row and an L shape.
A polyomino made from four squares is called a tetromino. There are five possible combinations and two reflections:
A polyomino made from five squares is called a pentomino. There are twelve possible pentominoes, excluding mirror images and rotations.
Ellipses are sometimes called ovals. An ellipse contains two foci. The sum of the distances from any point on the ellipse to the two foci is constant.
Area Shapes Extended into 3rd Dimension
Area Shapes Extended into 3rd Dimension Linearly to a Line or Point
Ellipsoids and Spheres
Suppose you are an astronomer in America. You observe an exciting event (say, a supernova) in the sky and would like to tell your colleagues in Europe about it. Suppose the supernova appeared at your zenith. You can't tell astronomers in Europe to look at their zenith because their zenith points in a different direction. You might tell them which constellation to look in. This might not work, though, because it might be too hard to find the supernova by searching an entire constellation. The best solution would be to give them an exact position by using a coordinate system.
On Earth, you can specify a location using latitude and longitude. This system works by measuring the angles separating the location from two great circles on Earth (namely, the equator and the prime meridian). Coordinate systems in the sky work in the same way.
The equatorial coordinate system is the most commonly used. The equatorial system defines two coordinates: right ascension and declination, based on the axis of the Earth's rotation. The declination is the angle of an object north or south of the celestial equator. Declination on the celestial sphere corresponds to latitude on the Earth. The right ascension of an object is defined by the position of a point on the celestial sphere called the vernal equinox. The further an object is east of the vernal equinox, the greater its right ascension.
A coordinate system is a system designed to establish positions with respect to given reference points. The coordinate system consists of one or more reference points, the styles of measurement (linear measurement or angular measurement) from those reference points, and the directions (or axes) in which those measurements will be taken. In astronomy, various coordinate systems are used to precisely define the locations of astronomical objects.
Latitude and longitude are used to locate a certain position on the Earth's surface. The lines of latitude (horizontal) and the lines of longitude (vertical) make up an invisible grid over the Earth. Lines of latitude are called parallels. Lines of longitude run from the exact point of the north pole to the exact point of the south pole and are called meridians. 0 degrees latitude is the Earth's middle, called the equator. Choosing 0 degrees longitude was trickier, because there is no natural vertical middle of the Earth. It was finally agreed that the observatory in Greenwich, U.K. would mark 0 degrees longitude, due to its significant role in scientific discoveries and in the development of latitude and longitude. 0 degrees longitude is called the prime meridian.
Latitude and longitude are measured in degrees. One degree of latitude is about 69 miles. There are sixty minutes (') in a degree and sixty seconds (") in a minute. These smaller units make GPS (Global Positioning System) readings much more exact.
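Since a degree has 60 minutes and a minute has 60 seconds, converting between decimal degrees and degrees-minutes-seconds is simple arithmetic; a small sketch (for non-negative angles):

```python
def dms_to_decimal(degrees, minutes, seconds):
    """60 minutes per degree, 60 seconds per minute."""
    return degrees + minutes / 60 + seconds / 3600

def decimal_to_dms(decimal_degrees):
    degrees = int(decimal_degrees)
    remainder = (decimal_degrees - degrees) * 60
    minutes = int(remainder)
    seconds = (remainder - minutes) * 60
    return degrees, minutes, seconds

print(dms_to_decimal(51, 28, 40))   # 51.4777..., roughly the latitude of Greenwich
print(decimal_to_dms(51.4778))      # (51, 28, 40.08...)
```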
There are a few main lines of latitude:the Arctic Circle, the Antarctic Circle, the Tropic of Cancer, and the Tropic of Capricorn. The Antarctic Circle is 66.5 degrees south of the equator and it marks the temperate zone from the Antarctic zone. The Arctic Circle is an exact mirror in the north. The Tropic of Cancer separates the tropics from the temperate zone. It is 23.5 degrees north of the equator. It is mirrored in the south by the Tropic of Capricorn.
Horizontal coordinate system
One of the simplest ways of placing a star on the night sky is the coordinate system based on altitude or azimuth, thus called the Alt-Az or horizontal coordinate system. The reference circles for this system are the horizon and the celestial meridian, both of which may be most easily graphed for a given location using the celestial sphere.
In simplest terms, the altitude is the angle made from the position of the celestial object (e.g. star) to the point nearest it on the horizon. The azimuth is the angle from the northernmost point of the horizon (which is also its intersection with the celestial meridian) to the point on the horizon nearest the celestial object. Usually azimuth is measured eastwards from due north. So east has az=90°, south has az=180°, west has az=270° and north has az=360° (or 0°). An object's altitude and azimuth change as the earth rotates.
Equatorial coordinate system
The equatorial coordinate system is another system that uses two angles to place an object on the sky: right ascension and declination.
Ecliptic coordinate system
The ecliptic coordinate system is based on the ecliptic plane, i.e., the plane which contains our Sun and Earth's average orbit around it, which is tilted at 23°26' from the plane of Earth's equator. The great circle at which this plane intersects the celestial sphere is the ecliptic, and one of the coordinates used in the ecliptic coordinate system, the ecliptic latitude, describes how far an object is to ecliptic north or to ecliptic south of this circle. On this circle lies the point of the vernal equinox (also called the first point of Aries); ecliptic longitude is measured as the angle of an object relative to this point to ecliptic east. Ecliptic latitude is generally indicated by φ, whereas ecliptic longitude is usually indicated by λ.
Galactic coordinate system
As a member of the Milky Way Galaxy, we have a clear view of the Milky Way from Earth. Since we are inside the Milky Way, we don't see the galaxy's spiral arms, central bulge and so forth directly as we do for other galaxies. Instead, the Milky Way completely encircles us. We see the Milky Way as a band of faint starlight forming a ring around us on the celestial sphere. The disk of the galaxy forms this ring, and the bulge forms a bright patch in the ring. You can easily see the Milky Way's faint band from a dark, rural location.
Our galaxy defines another useful coordinate system — the galactic coordinate system. This system works just like the others we've discussed. It also uses two coordinates to specify the position of an object on the celestial sphere. The galactic coordinate system first defines a galactic latitude, the angle an object makes with the galactic equator. The galactic equator has been selected to run through the center of the Milky Way's band. The second coordinate is galactic longitude, which is the angular separation of the object from the galaxy's "prime meridian," the great circle that passes through the Galactic center and the galactic poles. The galactic coordinate system is useful for describing an object's position with respect to the galaxy's center. For example, if an object has high galactic latitude, you might expect it to be less obstructed by interstellar dust.
Transformations between coordinate systems
One can use the principles of spherical trigonometry as applied to triangles on the celestial sphere to derive formulas for transforming coordinates in one system to those in another. These formulas generally rely on the spherical law of cosines, known also as the cosine rule for sides. By substituting various angles on the celestial sphere for the angles in the law of cosines and by thereafter applying basic trigonometric identities, most of the formulas necessary for coordinate transformations can be found. The law of cosines is stated thus: for a spherical triangle with sides a, b, and c (each measured as an angle) and with C the angle opposite side c,
cos c = cos a cos b + sin a sin b cos C.
To transform from horizontal to equatorial coordinates, the relevant formulas are as follows:
where RA is the right ascension, Dec is the declination, LST is the local sidereal time, Alt is the altitude, Az is the azimuth, and Lat is the observer's latitude. Using the same symbols and formulas, one can also derive formulas to transform from equatorial to horizontal coordinates:
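The formulas referred to above appeared as figures in the original; as a hedged reconstruction, the sketch below implements the standard equatorial-to-horizontal transformation, assuming azimuth is measured eastwards from north (as earlier in this chapter) and working entirely in degrees. Treat it as illustrative rather than as the book's exact formulas:

```python
import math

def equatorial_to_horizontal(ra_deg, dec_deg, lat_deg, lst_deg):
    """Return (altitude, azimuth) in degrees for an object with the given right
    ascension and declination, seen from latitude lat_deg when the local
    sidereal time (expressed in degrees) is lst_deg."""
    ha = math.radians(lst_deg - ra_deg)      # hour angle
    dec = math.radians(dec_deg)
    lat = math.radians(lat_deg)

    sin_alt = (math.sin(dec) * math.sin(lat)
               + math.cos(dec) * math.cos(lat) * math.cos(ha))
    alt = math.asin(sin_alt)

    cos_az = ((math.sin(dec) - math.sin(alt) * math.sin(lat))
              / (math.cos(alt) * math.cos(lat)))
    az = math.acos(max(-1.0, min(1.0, cos_az)))   # clamp to guard against rounding
    if math.sin(ha) > 0:                          # object west of the meridian
        az = 2 * math.pi - az
    return math.degrees(alt), math.degrees(az)

# An object on the celestial equator crossing the meridian (hour angle 0),
# seen from latitude 40 N, should appear due south at altitude 50:
print(equatorial_to_horizontal(ra_deg=0, dec_deg=0, lat_deg=40, lst_deg=0))
# approximately (50.0, 180.0)
```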
Transformation from equatorial to ecliptic coordinate systems can similarly be accomplished using the following formulas:
where RA is the right ascension, Dec is the declination, φ is the ecliptic latitude, λ is the ecliptic longitude, and ε is the tilt of Earth's axis relative to the ecliptic plane. Again, using the same formulas and symbols, new formulas for transforming ecliptic to equatorial coordinate systems can be found:
- Traditional Geometry:
A topological space is a set X together with a collection C of subsets of X such that the empty set and X are contained in C, the union of any subcollection of sets in C is contained in C, and the intersection of any finite subcollection of sets in C is contained in C. The sets in C are called open sets. Their complements relative to X are called closed sets.
Given two topological spaces, X and Y, a map f from X to Y is continuous if for every open set U of Y, f−1(U) is an open set of X.
Hyperbolic and Elliptic Geometry
There are precisely three different classes of three-dimensional constant-curvature geometry: Euclidean, hyperbolic and elliptic geometry. The three geometries are all built on the same first four axioms, but each has a unique version of the fifth axiom, also known as the parallel postulate. The 1868 Essay on an Interpretation of Non-Euclidean Geometry by Eugenio Beltrami (1835 - 1900) proved the logical consistency of the two Non-Euclidean geometries, hyperbolic and elliptic.
The Parallel Postulate
The parallel postulate is as follows for the corresponding geometries.
Euclidean geometry: Playfair's version: "Given a line l and a point P not on l, there exists a unique line m through P that is parallel to l." Euclid's version: "Suppose that a line l meets two other lines m and n so that the sum of the interior angles on one side of l is less than 180°. Then m and n intersect in a point on that side of l." These two versions are equivalent; though Playfair's may be easier to conceive, Euclid's is often useful for proofs.
Hyperbolic geometry: Given an arbitrary infinite line l and any point P not on l, there exist two or more distinct lines which pass through P and are parallel to l.
Elliptic geometry: Given an arbitrary infinite line l and any point P not on l, there does not exist a line which passes through P and is parallel to l.
Hyperbolic geometry is also known as saddle geometry or Lobachevskian geometry. It differs in many ways from Euclidean geometry, often leading to quite counter-intuitive results. Some of these remarkable consequences of this geometry's unique fifth postulate include:
1. The sum of the three interior angles in a triangle is strictly less than 180°. Moreover, the angle sums of two distinct triangles are not necessarily the same.
2. Two triangles with the same interior angles have the same area.
Models of Hyperbolic Space
The following are four of the most common models used to describe hyperbolic space.
1. The Poincaré Disc Model. Also known as the conformal disc model. In it, the hyperbolic plane is represented by the interior of a circle, and lines are represented by arcs of circles that are orthogonal to the boundary circle and by diameters of the boundary circle. Preserves hyperbolic angles.
2. The Klein Model. Also known as the Beltrami-Klein model or projective disc model. In it, the hyperbolic plane is represented by the interior of a circle, and lines are represented by chords of the circle. This model gives a misleading visual representation of the magnitude of angles.
3. The Poincaré Half-Plane Model. The hyperbolic plane is represented by one-half of the Euclidean plane, as defined by a given Euclidean line l, where l is not considered part of the hyperbolic space. Lines are represented by half-circles orthogonal to l or rays perpendicular to l. Preserves hyperbolic angles.
4. The Lorentz Model. Spheres in Lorentzian four-space. The hyperbolic plane is represented by a two-dimensional hyperboloid of revolution embedded in three-dimensional Minkowski space.
Based on this geometry's definition of the fifth axiom, what does parallel mean? The following definitions are made for this geometry. If a line l and a line m do not intersect in the hyperbolic plane, but intersect at the plane's boundary of infinity, then l and m are said to be parallel. If a line p and a line q neither intersect in the hyperbolic plane nor at the boundary at infinity, then p and q are said to be ultraparallel.
The Ultraparallel Theorem
For any two lines m and n in the hyperbolic plane such that m and n are ultraparallel, there exists a unique line l that is perpendicular to both m and n.
Elliptic geometry differs in many ways from Euclidean geometry, often leading to quite counter-intuitive results. For example, directly from this geometry's fifth axiom we have that there exist no parallel lines. Some of the other remarkable consequences of the parallel postulate include: the sum of the three interior angles in a triangle is strictly greater than 180°.
Models of Elliptic Space
Spherical geometry gives us perhaps the simplest model of elliptic geometry. Points are represented by points on the sphere. Lines are represented by great circles of the sphere.
- Euclid's First Four Postulates
- Euclid's Fifth Postulate
- Incidence Geometry
- Projective and Affine Planes (necessary?)
- Axioms of Betweenness
- Pasch and Crossbar
- Axioms of Congruence
- Continuity (necessary?)
- Hilbert Planes
- Neutral Geometry
- Modern geometry
- An Alternative Way and Alternative Geometric Means of Calculating the Area of a Circle