Here's a very simple GNU Make function: it takes three arguments and makes a 'date' out of them by inserting / between the first and second and between the second and third arguments:

make_date = $1/$2/$3

The first thing to notice is that make_date is defined just like any other GNU Make macro (you must use = and not := for reasons we'll see below). To use make_date we $(call) it like this:

today = $(call make_date,19,12,2007)

That will result in today containing 19/12/2007. The macro uses the special macros $1, $2, and $3. These macros contain the arguments specified in the $(call): $1 is the first argument, $2 the second, and so on. There's no maximum number of arguments, but from the tenth argument onwards you need parens: you can't write $10, you have to write $(10). There's also no minimum number. Arguments that are missing are simply undefined and will typically be treated as an empty string. The special argument $0 contains the name of the function; in the example above $0 is make_date.

Since functions are just macros with some special automatic macros filled in (if you use the $(origin) function on any of the argument macros ($1 etc.) you'll find that they are classed as automatic, just like $@), you can use the GNU Make built-in functions to build up complex functions. Here's a function that turns every / into a \ in a path, using $(subst):

unix_to_dos = $(subst /,\,$1)

Don't be worried about the use of / and \ there. GNU Make does very little escaping, and a literal \ is most of the time just a \.

Some argument handling gotchas

When GNU Make is processing a $(call) it starts by splitting the argument list on commas to set $1 etc. The arguments are expanded so that $1 etc. are completely expanded before they are ever referenced (it's as if GNU Make used := to set them). This means that if an argument has a side effect (such as calling $(shell)) then that side effect will always occur as soon as the $(call) is executed, even if the argument was never actually used by the function.

One common problem is that if an argument contains a comma the splitting of arguments can go wrong. For example, here's a simple function that swaps its two arguments:

swap = $2 $1

If you do $(call swap,first,argument,second), GNU Make doesn't have any way to know that the first argument was meant to be first,argument, and swap ends up returning argument first instead of second first,argument. There are two ways around this. You could simply hide the first argument inside a macro; since GNU Make doesn't expand the arguments until after splitting, a comma inside a macro will not cause any confusion:

FIRST := first,argument
SWAPPED := $(call swap,$(FIRST),second)

The other way to do this is to create a simple macro that just contains a comma and use that instead:

c := ,
SWAPPED := $(call swap,first$cargument,second)

Or even call that macro , and use it (with parens):

, := ,
SWAPPED := $(call swap,first$(,)argument,second)

Calling built-in functions

It's possible to use the $(call) syntax with built-in GNU Make functions: for example, you could $(call) $(warning) just as you would a user-defined function. This is useful because it means that you can pass any function name as an argument to a user-defined function and $(call) it without needing to know whether it's built-in or not. This gives you the ability to create functions that act on functions. The classic functional programming map function (which applies a function to every member of a list, returning the resulting list) can be created in just this way.
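A minimal sketch of such a map function follows; the macro name map and the example variables paths and dos_paths are illustrative choices for this note, not names from the article, but the technique is exactly the $(call)-a-function-name trick described above.

```make
# From the article above: turn every / into \ in a path.
unix_to_dos = $(subst /,\,$1)

# Sketch of 'map': apply function $(1) to every word of list $(2).
# Works for user-defined and built-in function names alike.
map = $(foreach a,$(2),$(call $(1),$(a)))

# Hypothetical example input, for illustration only.
paths     := src/main.c src/util.c
dos_paths := $(call map,unix_to_dos,$(paths))

$(info $(dos_paths))   # prints: src\main.c src\util.c

# Dummy target so the makefile runs cleanly on its own.
all: ;
```

Because map only ever sees a function name, the same call works with a built-in, e.g. $(call map,origin,MAKEFLAGS MAKEFILE_LIST).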
http://www.agileconnection.com/article/gnu-make-user-defined-functions
Fallacies are defects in an argument, other than false premises, which cause an argument to be invalid, unsound, or weak. Fallacies can be separated into two general groups: formal and informal.

A formal fallacy is a defect which can be identified merely by looking at the logical structure of an argument rather than at any specific statements. Formal fallacies are found only in deductive arguments with identifiable forms. One of the things which makes them appear reasonable is the fact that they look like and mimic valid logical arguments, but are in fact invalid. Here is an example:

1. All humans are mammals. (premise)
2. All cats are mammals. (premise)
3. All humans are cats. (conclusion)

Both premises in this argument are true, but the conclusion is false. The defect is a formal fallacy, and can be demonstrated by reducing the argument to its bare structure:

1. All A are C
2. All B are C
3. All A are B

It does not really matter what A, B and C stand for; we could replace them with wines, milk and beverages. The argument would still be invalid, and for the exact same reason. Sometimes, therefore, it is helpful to reduce an argument to its structure and ignore content in order to see if it is valid.

Informal fallacies are defects which can be identified only through an analysis of the actual content of the argument rather than through its structure. Here is an example:

1. Geological events produce rock. (premise)
2. Rock is a type of music. (premise)
3. Geological events produce music. (conclusion)

The premises in this argument are true, but clearly the conclusion is false. Is the defect a formal fallacy or an informal fallacy? To see whether this is actually a formal fallacy, we have to break it down to its basic structure:

1. A = B
2. B = C
3. A = C

As we can see, this structure is valid; therefore the defect cannot be a formal fallacy identifiable from the structure. The defect must instead be an informal fallacy identifiable from the content. In fact, when we examine the content, we find that a key term, "rock," is being used with two different definitions (the technical term for this sort of fallacy is Equivocation).

Informal fallacies can work in several ways. Some distract the reader from what is really going on. Some, like the example above, make use of vagueness or ambiguity to cause confusion. Some appeal to emotions rather than to logic and reason.

Fallacies can be categorized in a number of different ways. Aristotle was the first to try to systematically describe and categorize them, identifying thirteen fallacies divided into two groups. Since then many more have been described, and the categorization has become more complicated. Thus, while the categorization used here should prove useful, it is not the only one possible.
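For readers who prefer symbols, the two skeletons above can be restated in predicate-logic notation; the predicate letters H, M, C, G, R used here are my own shorthand for "human", "mammal", "cat", "geological event", and "rock", not notation from the article.

```latex
% Invalid form (the formal fallacy): sharing the predicate M does not license the conclusion.
\forall x\,(H(x)\to M(x)),\;\; \forall x\,(C(x)\to M(x)) \;\not\vdash\; \forall x\,(H(x)\to C(x))

% The "rock" argument has a valid shape only if R means the same thing in both premises;
% reading it as two different predicates R_1 \neq R_2 exposes the equivocation.
\forall x\,(G(x)\to R_1(x)),\;\; \forall x\,(R_2(x)\to M(x)) \;\not\vdash\; \forall x\,(G(x)\to M(x))
```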
http://atheism.about.com/od/logicalarguments/a/fallacy.htm
Part I: Euclidean Geometry

Chapter 1: Points, Lines, Line Segments and Rays

Points and lines are two of the most fundamental concepts in Geometry, but they are also the most difficult to define. We can describe their characteristics intuitively, but there is no set definition for them: they, along with the plane, are the undefined terms of geometry. All other geometric definitions and concepts are built on the undefined ideas of the point, line and plane. Nevertheless, we shall try to define them.

A point is an exact location in space. Points are dimensionless. That is, a point has no width, length, or height. We locate points relative to some arbitrary standard point, often called the "origin". Many physical objects suggest the idea of a point. Examples include the tip of a pencil, the corner of a cube, or a dot on a sheet of paper.

As for a line segment, we specify a line with two points. Starting with the corresponding line segment, we find other line segments that share at least two points with the original line segment. In this way we extend the original line segment indefinitely. The set of all possible line segments findable in this way constitutes a line. A line extends indefinitely in a single dimension. Its length, having no limit, is infinite. Like the line segments that constitute it, it has no width or height. You may specify a line by specifying any two points within the line. For any two points, only one line passes through both points. On the other hand, an unlimited number of lines pass through any single point.

We construct a ray similarly to the way we constructed a line, but we extend the line segment beyond only one of the original two points. A ray extends indefinitely in one direction, but ends at a single point in the other direction. That point is called the end-point of the ray. Note that a line segment has two end-points, a ray one, and a line none.

A point exists in zero dimensions. A line exists in one dimension, and we specify a line with two points. A plane exists in two dimensions. We specify a plane with three points. Any two of the points specify a line. All possible lines that pass through the third point and any point in the line make up a plane. In more obvious language, a plane is a flat surface that extends indefinitely in its two dimensions, length and width. A plane has no height.

Space exists in three dimensions. Space is made up of all possible planes, lines, and points. It extends indefinitely in all directions. Mathematics can extend space beyond the three dimensions of length, width, and height. We then refer to "normal" space as 3-dimensional space. A 4-dimensional space consists of an infinite number of 3-dimensional spaces. Etc.

[How we label and reference points, lines, and planes.]

Chapter 2: Angles

An angle is the union of two rays with a common endpoint, called the vertex. The angles formed by vertical and horizontal lines are called right angles; lines, segments, or rays that intersect in right angles are said to be perpendicular. Angles, for our purposes, can be measured in either degrees (from 0 to 360) or radians (from 0 to 2π). An angle's measure can be determined by measuring along the arc it maps out on a circle. In radians we consider the length of the arc of the circle mapped out by the angle. Since the circumference of a circle is 2πr, a full circle measures 2π radians and a right angle is π/2 radians.
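For reference, the degree and radian scales line up as follows (a standard conversion note, added here as an aside):

```latex
360^\circ = 2\pi \text{ rad}, \qquad
\theta_{\mathrm{rad}} = \theta_{\mathrm{deg}} \cdot \frac{\pi}{180}, \qquad
90^\circ = \frac{\pi}{2} \text{ rad} \approx 1.5708 \text{ rad}
```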
In degrees, the circle is 360 degrees, and so a right angle would be 90 degrees.

Angles are named in several ways:
- By naming the vertex of the angle (only if there is only one angle formed at that vertex; the name must be non-ambiguous).
- By naming a point on each side of the angle with the vertex in between.
- By placing a small number on the interior of the angle near the vertex.

Classification of Angles by Degree Measure
- An angle is said to be acute if it measures between 0 and 90 degrees, exclusive.
- An angle is said to be right if it measures 90 degrees. Notice the small box placed in the corner of a right angle; unless the box is present, the angle is not assumed to be 90 degrees. All right angles are congruent.
- An angle is said to be obtuse if it measures between 90 and 180 degrees, exclusive.

Special Pairs of Angles
- Adjacent angles: adjacent angles are angles with a common vertex and a common side. Adjacent angles have no interior points in common.
- Complementary angles: complementary angles are two angles whose sum is 90 degrees. Complementary angles may or may not be adjacent. If two complementary angles are adjacent, then their exterior sides are perpendicular.
- Supplementary angles: two angles are said to be supplementary if their sum is 180 degrees. Supplementary angles need not be adjacent. If supplementary angles are adjacent, then the sides they do not share form a line.
- Linear pair: if a pair of angles is both adjacent and supplementary, they are said to form a linear pair.
- Vertical angles: angles with a common vertex whose sides form opposite rays are called vertical angles. Vertical angles are congruent.

Side-Side-Side (SSS) (Postulate 12): If three sides of one triangle are congruent to three sides of a second triangle, then the two triangles are congruent.

Side-Angle-Side (SAS) (Postulate 13): If two sides and the included angle of one triangle are congruent to two sides and the included angle of a second triangle, then the two triangles are congruent.

Angle-Side-Angle (ASA): If two angles and the included side of one triangle are congruent to two angles and the included side of a second triangle, then the two triangles are congruent.

Angle-Angle-Side (AAS): If two angles and a non-included side of one triangle are congruent to two angles and the corresponding non-included side of a second triangle, then the two triangles are congruent.

NO - Angle-Side-Side (ASS): The "ASS" postulate does not work, unlike the other ones. A way that students can remember this is that "ass" is not a nice word, so we don't use it in geometry (since it does not work).

There are two approaches to furthering knowledge: reasoning from known ideas and synthesizing observations. In inductive reasoning you observe the world and attempt to explain based on your observations; you start with no prior assumptions. Deductive reasoning consists of logical assertions from known facts.

What you need to know

Before one can start to understand logic, and thereby begin to prove geometric theorems, one must first know a few vocabulary words and symbols.

Conditional: a conditional is something which states that one statement implies another. A conditional contains two parts: the condition and the conclusion, where the former implies the latter. A conditional is always in the form "If statement 1, then statement 2." In most mathematical notation, a conditional is often written in the form p ⇒ q, which is read as "If p, then q," where p and q are statements.

Converse: the converse of a logical statement is when the conclusion becomes the condition and vice versa; i.e., p ⇒ q becomes q ⇒ p.
For example, the converse of the statement "If someone is a woman, then they are a human" would be "If someone is a human, then they are a woman." The converse of a conditional does not necessarily have the same truth value as the original, though it sometimes does, as will become apparent later.

AND: "And" is a logical operator which is true only when both statements are true. For example, the statement "Diamond is the hardest substance known to man AND a diamond is a metal" is false. While the former statement is true, the latter is not. However, the statement "Diamond is the hardest substance known to man AND diamonds are made of carbon" would be true, because both parts are true.

OR: If two statements are joined together by "or," then the truth of the "or" statement is dependent upon whether one or both of the statements from which it is composed is true. For example, the statement "Tuesday is the day after Monday OR Thursday is the day after Saturday" would have a truth value of "true," because even though the latter statement is false, the former is true.

NOT: If a statement is preceded by "NOT," then it is evaluating the opposite truth value of that statement. The symbol for "NOT" is "¬". For example, if the statement p is "Elvis is dead," then ¬p would be "Elvis is not dead." The concept of "NOT" can cause some confusion when it relates to statements which contain the word "all." For example, if r is "All men have hair," then ¬r would be "All men do not have hair" or "No men have hair." Do not confuse this with "Not all men have hair" or "Some men have hair." The "NOT" should apply to the verb in the statement: in this case, "have." ¬p can also be written as NOT p or ~p. NOT p may also be referred to as the "negation of p."

Inverse: The inverse of a conditional says that the negation of the condition implies the negation of the conclusion. For example, the inverse of p ⇒ q is ¬p ⇒ ¬q. Like a converse, an inverse does not necessarily have the same truth value as the original conditional.

Biconditional: A biconditional is a conditional where the condition and the conclusion imply one another. A biconditional starts with the words "if and only if." For example, "If and only if p, then q" means both that p implies q and that q implies p.

Premise: A premise is a statement whose truth value is known initially. For example, if one were to say "If today is Thursday, then the cafeteria will serve burritos," and one knew what day it was, then the premise would be "Today is Thursday" or "Today is not Thursday."

⇒: The symbol which denotes a conditional. p ⇒ q is read as "if p, then q."

Iff: Iff is a shortened form of "if and only if." It is read as "if and only if."

⇔: The symbol which denotes a biconditional. p ⇔ q is read as "If and only if p, then q."

∴: The symbol for "therefore." p ∴ q means that one knows that p is true (p is true is the premise), and has logically concluded that q must also be true.

∧: The symbol for "and."

∨: The symbol for "or."

There are a few forms of deductive logic. One of the most common deductive logical arguments is modus ponens, which states:
- p ⇒ q
- p ∴ q
- (If p, then q)
- (p, therefore q)

An example of modus ponens:
- If I stub my toe, then I will be in pain.
- I stub my toe.
- Therefore, I am in pain.

Another form of deductive logic is modus tollens, which states the following:
- p ⇒ q
- ¬q ∴ ¬p
- (If p, then q)
- (not q, therefore not p)

Modus tollens is just as valid a form of logic as modus ponens.
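One way to see why, written as an aside in symbols: a conditional is logically equivalent to its contrapositive, so from p ⇒ q together with ¬q one may conclude ¬p.

```latex
(p \Rightarrow q) \;\equiv\; (\neg q \Rightarrow \neg p)
\qquad\text{hence}\qquad
\big((p \Rightarrow q) \wedge \neg q\big) \;\vdash\; \neg p
```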
The following is an example which uses modus tollens:
- If today is Thursday, then the cafeteria will be serving burritos.
- The cafeteria is not serving burritos, therefore today is not Thursday.

Another form of deductive logic is known as the If-Then Transitive Property. Simply put, it means that there can be chains of logic where one thing implies another thing. The If-Then Transitive Property states:
- p ⇒ q
- (q ⇒ r) ∴ (p ⇒ r)
- (If p, then q)
- ((If q, then r), therefore (if p, then r))

For example, consider the following chain of if-then statements:
- If today is Thursday, then the cafeteria will be serving burritos.
- If the cafeteria will be serving burritos, then I will be happy.
- Therefore, if today is Thursday, then I will be happy.

Inductive reasoning is a logical argument which does not definitively prove a statement, but rather assumes it. Inductive reasoning is used often in life. Polling is an example of the use of inductive reasoning. If one were to poll one thousand people, and 300 of those people selected choice A, then one would infer that 30% of any population might also select choice A. This would be using inductive logic, because it does not definitively prove that 30% of any population would select choice A. Because of this factor of uncertainty, inductive reasoning should be avoided when possible when attempting to prove geometric properties.

Truth tables are a way that one can display all the possibilities that a logical system may have when given certain premises. The following is a truth table with two premises (p and q), which shows the truth value of some basic logical statements. (NOTE: T = true; F = false)

| p | q | ¬p | ¬q | p ⇒ q | p ⇔ q | p ∧ q | p ∨ q |
| T | T | F  | F  | T     | T     | T     | T     |
| T | F | F  | T  | F     | F     | F     | T     |
| F | T | T  | F  | T     | F     | F     | T     |
| F | F | T  | T  | T     | T     | F     | F     |

Unlike science, which has theories, mathematics has a definite notion of proof. Mathematics applies deductive reasoning to create a series of logical statements which show that one thing implies another.

Consider a triangle, which we define as a shape with three vertices joined by three lines. We know that we can arbitrarily pick some point on a page and make that into a vertex. We repeat that process and pick a second point. Using a ruler, we can connect these two points. We now make a third point, and using the ruler connect it to each of the other points. We have constructed a triangle.

In mathematics we formalize this process into axioms, and carefully lay out the sequence of statements to show what follows. All terms are clearly defined. In modern mathematics, we are always working within some system where various axioms hold.

The most common form of explicit proof in high school geometry is the two-column proof, which consists of five parts: the given, the proposition, the statement column, the reason column, and the diagram (if one is given).

Example of a Two-Column Proof

Now, suppose a problem tells you to solve for x, showing all steps made to get to the answer. A proof shows how this is done:

Given: x + 1 = 2
Prove: x = 1

| Statement         | Reason                  |
| x + 1 = 2         | Given                   |
| x + 1 - 1 = 2 - 1 | Property of subtraction |
| x = 1             | Simplify                |

We use "Given" as the first reason, because it is "given" to us in the problem.

Written proofs (also known as informal proofs, paragraph proofs, or 'plans for proof') are written in paragraph form. Other than this formatting difference, they are similar to two-column proofs. Sometimes it is helpful to start with a written proof before formalizing the proof in two-column form. If you're having trouble putting your proof into two-column form, try "talking it out" in a written proof first.
Example of a Written Proof We are given that x + 1 = 2, so if we subtract one from each side of the equation (x + 1 - 1 = 2 - 1), then we can see that x = 1 by the definition of subtraction. A flowchart proof or more simply a flow proof is a graphical representation of a two-column proof. Each set of statement and reasons are recorded in a box and then arrows are drawn from one step to another. This method shows how different ideas come together to formulate the proof. Postulates in geometry are very similar to axioms, self-evident truths, and beliefs in logic, political philosophy and personal decision-making. The five postulates of Euclidean Geometry define the basic rules governing the creation and extension of geometric figures with ruler and compass. Together with the five axioms (or "common notions") and twenty-three definitions at the beginning of Euclid's Elements, they form the basis for the extensive proofs given in this masterful compilation of ancient Greek geometric knowledge. They are as follows: - A straight line may be drawn from any given point to any other. - A straight line may be extended to any finite length. - A circle may be described with any given point as its center and any distance as its radius. - All right angles are congruent. - If a straight line intersects two other straight lines, and so makes the two interior angles on one side of it together less than two right angles, then the other straight lines will meet at a point if extended far enough on the side on which the angles are less than two right angles. Postulate 5, the so-called Parallel Postulate was the source of much annoyance, probably even to Euclid, for being so relatively prolix. Mathematicians have a peculiar sense of aesthetics that values simplicity arising from simplicity, with the long complicated proofs, equations and calculations needed for rigorous certainty done behind the scenes, and to have such a long sentence amidst such other straightforward, intuitive statements seems awkward. As a result, many mathematicians over the centuries have tried to prove the results of the Elements without using the Parallel Postulate, but to no avail. However, in the past two centuries, assorted non-Euclidean geometries have been derived based on using the first four Euclidean postulates together with various negations of the fifth. Chapter 7. Vertical Angles Vertical angles are a pair of angles with a common vertex whose sides form opposite rays. An extensively useful fact about vertical angles is that they are congruent. Aside from saying that any pair of vertical angles "obviously" have the same measure by inspection, we can prove this fact with some simple algebra and an observation about supplementary angles. Let two lines intersect at a point, and angles A1 and A2 be a pair of vertical angles thus formed. At the point of intersection, two other angles are also formed, and we'll call either one of them B1 without loss of generality. Since B1 and A1 are supplementary, we can say that the measure of B1 plus the measure of A1 is 180. Similarly, the measure of B1 plus the measure of A2 is 180. Thus the measure of A1 plus the measure of B1 equals the measure of A2 plus the measure of B1, by substitution. Then by subracting the measure of B1 from each side of this equality, we have that the measure of A1 equals the measure of A2. Parallel Lines in a Plane Two coplanar lines are said to be parallel if they never intersect. 
For any given point on the first line, its distance to the second line is equal to the distance between any other point on the first line and the second line. The common notation for parallel lines is "||" (a double pipe); it is not unusual to see "//" as well. If line m is parallel to line n, we write "m || n". Lines in a plane either coincide, intersect in a point, or are parallel. Controversies surrounding the Parallel Postulate lead to the development of non-Euclidean geometries. Parallel Lines and Special Pairs of Angles When two (or more) parallel lines are cut by a transversal, the following angle relationships hold: - corresponding angles are congruent - alternate exterior angles are congruent - same-side interior angles are supplementary Theorems Involving Parallel Lines - If a line in a plane is perpendicular to one of two parallel lines, it is perpendicular to the other line as well. - If a line in a plane is parallel to one of two parallel lines, it is parallel to both parallel lines. - If three or more parallel lines are intersected by two or more transversals, then they divide the transversals proportionally. Congruent shapes are the same size with corresponding lengths and angles equal. In other words, they are exactly the same size and shape. They will fit on top of each other perfectly. Therefore if you know the size and shape of one you know the size and shape of the others. For example: Each of the above shapes is congruent to each other. The only difference is in their orientation, or the way they are rotated. If you traced them onto paper and cut them out, you could see that they fit over each other exactly. Having done this, right away we can see that, though the angles correspond in size and position, the sides do not. Therefore it is proved the triangles are not congruent. Similar shapes are like congruent shapes in that they must be the same shape, but they don't have to be the same size. Their corresponding angles are congruent and their corresponding sides are in proportion. Methods of Determining Congruence Two triangles are congruent if: - each pair of corresponding sides is congruent - two pairs of corresponding angles are congruent and a pair of corresponding sides are congruent - two pairs of corresponding sides and the angles included between them are congruent Tips for Proofs Commonly used prerequisite knowledge in determining the congruence of two triangles includes: - by the reflexive property, a segment is congruent to itself - vertical angles are congruent - when parallel lines are cut by a transversal corresponding angles are congruent - when parallel lines are cut by a transversal alternate interior angles are congruent - midpoints and bisectors divide segments and angles into two congruent parts For two triangles to be similar, all 3 corresponding angles must be congruent, and all three sides must be proportionally equal. Two triangles are similar if... - Two angles of each triangle are congruent. - The acute angle of a right triangle is congruent to the acute angle of another right triangle. - The two triangles are congruent. Note here that congruency implies similarity. A quadrilateral is a polygon that has four sides. Special Types of Quadrilaterals - A parallelogram is a quadrilateral having two pairs of parallel sides. - A square, a rhombus, and a rectangle are all examples of parallelograms. - A rhombus is a quadrilateral of which all four sides are the same length. - A rectangle is a parallelogram of which all four angles are 90 degrees. 
- A square is a quadrilateral of which all four sides are of the same length, and all four angles are 90 degrees.
- A square is a rectangle, a rhombus, and a parallelogram.
- A trapezoid is a quadrilateral which has two parallel sides (U.S.).
- U.S. usage: A trapezium is a quadrilateral which has no parallel sides.
- U.K. usage: A trapezium is a quadrilateral with two parallel sides (same as the U.S. trapezoid definition).
- A kite is a quadrilateral with two pairs of congruent adjacent sides.

One of the most important properties used in proofs is that the sum of the angles of a quadrilateral is always 360 degrees. This can easily be proven, too: if you draw a random quadrilateral and one of its diagonals, you'll split it up into two triangles. Given that the sum of the angles of a triangle is 180 degrees, you can sum them up, and it'll give 360 degrees.

A parallelogram is a geometric figure with two pairs of parallel sides. Parallelograms are a special type of quadrilateral. The opposite sides are equal in length and the opposite angles are also equal. The area is equal to the product of any side and the distance between that side and the line containing the opposite side.

Properties of Parallelograms

The following properties are common to all parallelograms (parallelogram, rhombus, rectangle, square):
- both pairs of opposite sides are parallel
- both pairs of opposite sides are congruent
- both pairs of opposite angles are congruent
- the diagonals bisect each other

- A rhombus is a parallelogram with four congruent sides.
- The diagonals of a rhombus are perpendicular.
- Each diagonal of a rhombus bisects two angles of the rhombus.
- A rhombus may or may not be a square.

- A square is a parallelogram with four right angles and four congruent sides.
- A square is both a rectangle and a rhombus and inherits all of their properties.

A Trapezoid (American English) or Trapezium (British English) is a quadrilateral that has two parallel sides and two non-parallel sides. Some properties of trapezoids:
- The interior angles sum to 360° as in any quadrilateral.
- The parallel sides are unequal.
- Each of the parallel sides is called a base (b) of the trapezoid. The two angles that join one base are called 'base angles'.
- If the two non-parallel sides are equal, the trapezoid is called an isosceles trapezoid.
- In an isosceles trapezoid, each pair of base angles is equal.
- If one pair of base angles of a trapezoid is equal, the trapezoid is isosceles.
- A line segment connecting the midpoints of the non-parallel sides is called the median (m) of the trapezoid.
- The median of a trapezoid is equal to one half the sum of the bases (called b1 and b2).
- A line segment perpendicular to the bases is called an altitude (h) of the trapezoid.

The area (A) of a trapezoid is equal to the product of an altitude and the median: A = m × h. Recall, though, that the median is half of the sum of the bases. Substituting for m, we get A = ((b1 + b2) / 2) × h.

A circle is the set of all points in a plane that are equidistant from a single point; that single point is called the centre of the circle, and the distance between any point on the circle and the centre is called the radius of the circle. A chord is an internal segment of a circle that has both of its endpoints on the circumference of the circle.
- The diameter of a circle is the largest chord possible.

A secant of a circle is any line that intersects a circle in two places.
- A secant contains any chord of the circle.

A tangent to a circle is a line that intersects a circle in exactly one point, called the point of tangency.
- At the point of tangency, the tangent line and the radius of the circle are perpendicular.

Chapter 16: Circles/Arcs

An arc is a segment of the perimeter of a given circle. The measure of an arc is expressed as an angle, which could be in radians or degrees (more on radians later). The exact measure of the arc is determined by the measure of the angle formed when a line is drawn from the center of the circle to each endpoint. As an example, the circle below has an arc cut out of it with a measure of 30 degrees.

As mentioned before, an arc can be measured in degrees or radians. A radian is merely a different method for measuring an angle. If we take a unit circle (which has a radius of 1 unit), take an arc with a length equal to 1 unit, and draw a line from each endpoint to the center of the circle, the angle formed is equal to 1 radian. This concept is displayed below: in this circle an arc has been cut off by an angle of 1 radian, and therefore the length of the arc is equal to 1 because the radius is 1. From this definition we can say that a full circle measures 2π radians, because the perimeter of a unit circle is equal to 2π. Another property of this definition that will be extremely useful to anyone who studies arcs is that the length of an arc is equal to its measure in radians multiplied by the radius of the circle.

Converting to and from radians is a fairly simple process. Two facts are required to do so: first, a circle is equal to 360 degrees, and it is also equal to 2π radians. Using these two facts we can form the relation 360° = 2π radians; thus 1 degree is equal to π/180 radians. From here we can simply multiply by the number of degrees to convert to radians. For example, if we have 20 degrees and want to convert to radians, then we proceed as follows: 20 × π/180 = π/9 radians. The same sort of argument can be used to show the formula for getting 1 radian: since 2π radians = 360°, 1 radian is equal to 180/π degrees.

A tangent is a line in the same plane as a given circle that meets that circle in exactly one point. That point is called the point of tangency. A tangent cannot pass through a circle; if it does, it is classified as a chord. A secant is a line containing a chord. A common tangent is a line tangent to two circles in the same plane. If the tangent does not intersect the line containing and connecting the centers of the circles, it is an external tangent. If it does, it is an internal tangent. Two circles are tangent to one another if in a plane they intersect the same tangent in the same point.

Sector of a circle

A sector of a circle can be thought of as a pie piece. In the picture below, a sector of the circle is shaded yellow. To find the area of a sector, find the area of the whole circle and then multiply by the angle of the sector over 360 degrees (the arc-length and sector-area rules are collected in the short note below). A more intuitive approach can be used when the sector is half the circle: in this case the area of the sector would just be the area of the circle divided by 2.

Addition Property of Equality: For any real numbers a, b, and c, if a = b, then a + c = b + c.

A figure is an angle if and only if it is composed of two rays which share a common endpoint. Each of these rays (or segments, as the case may be) is known as a side of the angle (as in the illustration at right), and the common point is known as the angle's vertex (point B in the illustration). Angles are measured by the difference of their slopes.
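Collecting the arc and sector rules just described into symbols (a short reference note; θ denotes the central angle):

```latex
\text{arc length: } s = r\,\theta \;\;(\theta \text{ in radians}), \qquad
\text{sector area: } A = \frac{\theta}{360^\circ}\,\pi r^2
                       = \tfrac{1}{2} r^2 \theta \;\;(\theta \text{ in radians})
```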
The units for angle measure are radians and degrees. Angles may be classified by their degree measure.
- Acute Angle: an angle is an acute angle if and only if it has a measure of less than 90°.
- Right Angle: an angle is a right angle if and only if it has a measure of exactly 90°.
- Obtuse Angle: an angle is an obtuse angle if and only if it has a measure of greater than 90°.

Angle Addition Postulate: If P is in the interior of ∠ABC, then m∠ABP + m∠PBC = m∠ABC.

Center of a circle: Point P is the center of circle C if and only if all points in circle C are equidistant from point P and point P is contained in the same plane as circle C.

A collection of points is said to be a circle with a center at point P and a radius of some distance r if and only if it is the collection of all points which are a distance of r away from point P and are contained by a plane which contains point P.

A polygon is said to be concave if and only if it contains at least one interior angle with a measure greater than 180° and less than 360° (exclusive).

Two angles formed by a transversal intersecting with two lines are corresponding angles if and only if one is on the inside of the two lines, the other is on the outside of the two lines, and both are on the same side of the transversal.

Corresponding Angles Postulate: If two lines cut by a transversal are parallel, then their corresponding angles are congruent.

Corresponding Parts of Congruent Triangles are Congruent Postulate: The Corresponding Parts of Congruent Triangles are Congruent Postulate (CPCTC) states:
- If ∆ABC ≅ ∆XYZ, then all parts of ∆ABC are congruent to their corresponding parts in ∆XYZ. For example:
- ∠ABC ≅ ∠XYZ
- ∠BCA ≅ ∠YZX
- ∠CAB ≅ ∠ZXY

CPCTC also applies to all other parts of the triangles, such as a triangle's altitude, median, circumcenter, et al.

A line segment is the diameter of a circle if and only if it is a chord of the circle which contains the circle's center. (See Circle.)

A collection of points is a line if and only if the collection of points is perfectly straight (aligned), is infinitely long, and is infinitely thin. Between any two points on a line, there exists an infinite number of points which are also contained by the line. Lines are usually written by naming two points in the line, such as line AB (often marked with a double-arrow overline above the two letters).

A collection of points is a line segment if and only if it is perfectly straight, is infinitely thin, and has a finite length. A line segment is measured by the shortest distance between the two extreme points on the line segment, known as endpoints. Between any two points on a line segment, there exists an infinite number of points which are also contained by the line segment.

Two lines or line segments are said to be parallel if and only if the lines are contained by the same plane and have no points in common if continued infinitely.

Two planes are said to be parallel if and only if the planes have no points in common when continued infinitely.

Two lines are perpendicular if they intersect at a 90° angle. Given a line and a point P not on that line, there is one and only one line that goes through point P perpendicular to the given line.

An object is a plane if and only if it is a two-dimensional object which has no thickness or curvature and continues infinitely. A plane can be defined by three points. A plane may be considered to be analogous to a piece of paper.

A point is a zero-dimensional mathematical object representing a location in one or more dimensions. A point has no size; it has only location.
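The circle definition given a few entries above can also be written compactly in set notation; this paraphrase, and the labels C(P, r) and Π, are my own shorthand for this note.

```latex
C(P, r) = \{\, X \in \Pi \;:\; |XP| = r \,\},
\quad \text{where } \Pi \text{ is a plane containing } P \text{ and } r > 0.
```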
A polygon is a closed plane figure composed of at least 3 straight lines. Each side has to intersect another side at their respective endpoints, and the intersecting lines must not be collinear.

The radius of a circle is the distance between any given point on the circle and the circle's center. (See Circle.)

A ray is a straight collection of points which continues infinitely in one direction. The point at which the ray stops is known as the ray's endpoint. Between any two points on a ray, there exists an infinite number of points which are also contained by the ray.

The points on a line can be matched one to one with the real numbers. The real number that corresponds to a point is the point's coordinate. The distance between two points is the absolute value of the difference between the two coordinates of the two points.

Synthetic versus analytic geometry

Two- and Three-Dimensional Geometry and Other Geometric Figures

Perimeter and Arc Length

Perimeter of a circle: the circle's perimeter (its circumference) can be calculated using the formula C = 2πr, where r is the radius of the circle.

Perimeter of polygons: the perimeter of a polygon with n sides of lengths s1, s2, ..., sn is their sum, P = s1 + s2 + ... + sn.

Arc length of circles: the arc length of a given circle with radius r can be calculated using s = rθ, where θ is the angle given in radians.

Arc length of curves: if a curve in the plane has the parametric form (x(t), y(t)) for a ≤ t ≤ b, then the arc length can be calculated using the formula L = ∫ from a to b of √(x′(t)² + y′(t)²) dt. A derivation of the formula can be found using differential geometry on infinitely small triangles.

Area of Circles

The area of a circle is A = πr², where π is a constant roughly equal to 3.14159265358979 and r is the radius of the circle: a line drawn from any point on the circle to its center.

Area of Triangles

Three ways of calculating the area inside of a triangle are mentioned here.

If one of the sides of the triangle is chosen as a base, then a height for the triangle and that particular base can be defined. The height is a line segment perpendicular to the base or the line formed by extending the base, and the endpoints of the height are the corner point not on the base and a point on the base or the line extending the base. Let B = the length of the side chosen as the base. Let h = the distance between the endpoints of the height segment which is perpendicular to the base. Then the area of the triangle is given by A = ½ × B × h. This method of calculating the area is good if the value of a base and its corresponding height in the triangle is easily determined. This is particularly true if the triangle is a right triangle, and the lengths of the two sides sharing the 90° angle can be determined.

Heron's Formula: If the lengths of all three sides of a triangle are known, Heron's formula may be used to calculate the area of the triangle. First, the semiperimeter, s, must be calculated by dividing the sum of the lengths of all three sides by 2. For a triangle having side lengths a, b, and c: s = (a + b + c)/2. Then the triangle's area is given by A = √(s(s - a)(s - b)(s - c)).

If the triangle is needle shaped, that is, one of the sides is very much shorter than the other two, then it can be difficult to compute the area because the precision needed is greater than that available in the calculator or computer that is used. In other words, Heron's formula is numerically unstable. Another formula that is much more stable is A = ¼ × √((a + (b + c)) × (c - (a - b)) × (c + (a - b)) × (a + (b - c))), where a, b, and c have been sorted so that a ≥ b ≥ c.

In a triangle with sides of length a, b, and c and angles A, B, and C opposite them, the area is ½ × a × b × sin(C). This formula is true because, in the formula ½ × base × height, the height relative to base a is b × sin(C).
It is useful because you don't need to find the height from an angle in a separate step, and it is also used to prove the law of sines (divide all terms in the above equation by a*b*c and you'll get it directly!).

Area of Rectangles

The area calculation of a rectangle is simple and easy to understand. One of the sides is chosen as the base, with a length b. An adjacent side is then the height, with a length h, because in a rectangle the adjacent sides are perpendicular to the side chosen as the base. The rectangle's area is given by A = b × h. Sometimes, the base length may be referred to as the length of the rectangle, l, and the height as the width of the rectangle, w. Then the area formula becomes A = l × w. Regardless of the labels used for the sides, it is apparent that the two formulas are equivalent. Of course, the area of a square with sides having length s would be A = s².

Area of Parallelograms

The area of a parallelogram can be determined using the equation for the area of a rectangle. The formula is A = b × h, where A is the area of the parallelogram, b is the base, and h is the height. The height is a perpendicular line segment that connects one of the vertices to its opposite side (the base).

Area of a Rhombus

Remember that in a rhombus all sides are equal in length. With d1 and d2 representing the diagonals, the area is A = (d1 × d2) / 2.

Area of Trapezoids

The area of a trapezoid is derived from taking the arithmetic mean of its two parallel sides to form a rectangle of equal area: A = ((b1 + b2) / 2) × h, where b1 and b2 are the lengths of the two parallel bases and h is the height.

Area of Kites

The area of a kite is based on splitting the kite into four pieces by halving it along each diagonal and using these pieces to form a rectangle of equal area: A = (a × b) / 2, where a and b are the diagonals of the kite. Alternatively, the kite may be divided into two halves, each of which is a triangle, by the longer of its diagonals, a. The area of each triangle is thus ½ × a × (b/2) = ab/4, where b is the other (shorter) diagonal of the kite. And the total area of the kite (which is composed of two identical such triangles) is 2 × ab/4 = ab/2, which is the same as the formula above.

Areas of other Quadrilaterals

The areas of other quadrilaterals are slightly more complex to calculate, but can still be found if the quadrilateral is well-defined. For example, a quadrilateral can be divided into two triangles, or some combination of triangles and rectangles. The areas of the constituent polygons can be found and added up with arithmetic.

Volume is like area expanded out into 3 dimensions. Area deals with only 2 dimensions; for volume we have to consider another dimension. Area can be thought of as how much space some drawing takes up on a flat piece of paper. Volume can be thought of as how much space an object takes up.

Common equations for volume:
- A cube: V = s³, where s = length of a side.
- A rectangular prism: V = l × w × h, where l = length, w = width, h = height.
- A cylinder (circular prism): V = πr² × h, where r = radius of circular face, h = height.
- Any prism that has a constant cross-sectional area along the height: V = A × h, where A = area of the base, h = height.
- A sphere: V = (4/3)πr³, where r = radius of the sphere (this is the integral, with respect to r, of the surface area of a sphere, 4πr²).
- An ellipsoid: V = (4/3)πabc, where a, b, c = semi-axes of the ellipsoid.
- A pyramid: V = (1/3)A × h, where A = area of the base, h = height of the pyramid.
- A cone (circular-based pyramid): V = (1/3)πr² × h, where r = radius of circle at base, h = distance from base to tip.

(The units of volume depend on the units of length - if the lengths are in meters, the volume will be in cubic meters, etc.)
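As a quick worked check of the cylinder formula above (the numbers are chosen purely for illustration):

```latex
r = 2\text{ m},\; h = 5\text{ m} \quad\Longrightarrow\quad
V = \pi r^2 h = \pi \cdot 2^2 \cdot 5 = 20\pi \approx 62.8\text{ m}^3
```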
The volume of any solid whose cross sectional areas are all the same is equal to that cross sectional area times the distance the centroid(the center of gravity in a physical object) would travel through the solid. If two solids are contained between two parallel planes and every plane parallel to these two plane has equal cross sections through these two solids, then their volumes are equal. A Polygon is a two-dimensional figure, meaning all of the lines in the figure are contained within one plane. They are classified by the number of angles, which is also the number of sides. One key point to note is that a polygon must have at least three sides. Normally, three to ten sided figures are referred to by their names (below), while figures with eleven or more sides is an n-gon, where n is the number of sides. Hence a forty-sided polygon is called a 40-gon. A polygon with three angles and sides. A polygon with four angles and sides. A polygon with five angles and sides. A polygon with six angles and sides. A polygon with seven angles and sides. A polygon with eight angles and sides. A polygon with nine angles and sides. A polygon with ten angles and sides. For a list of n-gon names, go to and scroll to the bottom of the page. Polygons are also classified as convex or concave. A convex polygon has interior angles less than 180 degrees, thus all triangles are convex. If a polygon has at least one internal angle greater than 180 degrees, then it is concave. An easy way to tell if a polygon is concave is if one side can be extended and crosses the interior of the polygon. Concave polygons can be divided into several convex polygons by drawing diagonals. Regular polygons are polygons in which all sides and angles are congruent. A triangle is a type of polygon having three sides and, therefore, three angles. The triangle is a closed figure formed from three straight line segments joined at their ends. The points at the ends can be called the corners, angles, or vertices of the triangle. Since any given triangle lies completely within a plane, triangles are often treated as two-dimensional geometric figures. As such, a triangle has no volume and, because it is a two-dimensionally closed figure, the flat part of the plane inside the triangle has an area, typically referred to as the area of the triangle. Triangles are always convex polygons. A triangle must have at least some area, so all three corner points of a triangle cannot lie in the same line. The sum of the lengths of any two sides of a triangle is always greater than the length of the third side. The preceding statement is sometimes called the Triangle Inequality. Certain types of triangles Categorized by angle The sum of the interior angles in a triangle always equals 180o. This means that no more than one of the angles can be 90o or more. All three angles can all be less than 90oin the triangle; then it is called an acute triangle. One of the angles can be 90o and the other two less than 90o; then the triangle is called a right triangle. Finally, one of the angles can be more than 90o and the other two less; then the triangle is called an obtuse triangle. Categorized by sides If all three of the sides of a triangle are of different length, then the triangle is called a scalene triangle. If two of the sides of a triangle are of equal length, then it is called an isoceles triangle. In an isoceles triangle, the angle between the two equal sides can be more than, equal to, or less than 90o. The other two angles are both less than 90o. 
If all three sides of a triangle are of equal length, then it is called an equilateral triangle and all three of the interior angles must be 60o, making it equilangular. Because the interior angles are all equal, all equilateral triangles are also the three-sided variety of a regular polygon and they are all similar, but might not be congruent. However, polygons having four or more equal sides might not have equal interior angles, might not be regular polygons, and might not be similar or congruent. Of course, pairs of triangles which are not equilateral might be similar or congruent. Opposite corners and sides in triangles If one of the sides of a triangle is chosen, the interior angles of the corners at the side's endpoints can be called adjacent angles. The corner which is not one of these endpoints can be called the corner opposite to the side. The interior angle whose vertex is the opposite corner can be called the angle opposite to the side. Likewise, if a corner or its angle is chosen, then the two sides sharing an endpoint at that corner can be called adjacent sides. The side not having this corner as one of its two endpoints can be called the side opposite to the corner. The sides or their lengths of a triangle are typically labeled with lower case letters. The corners or their corresponding angles can be labeled with capital letters. The triangle as a whole can be labeled by a small triangle symbol and its corner points. In a triangle, the largest interior angle is opposite to longest side, and vice versa. Any triangle can be divided into two right triangles by taking the longest side as a base, and extending a line segment from the opposite corner to a point on the base such that it is perpendicular to the base. Such a line segment would be considered the height or altitude ( h ) for that particular base ( b ). The two right triangles resulting from this division would both share the height as one of its sides. The interior angles at the meeting of the height and base would be 90o for each new right triangle. For acute triangles, any of the three sides can act as the base and have a corresponding height. For more information on right triangles, see Right Triangles and Pythagorean Theorem. Area of Triangles If base and height of a triangle are known, then the area of the triangle can be calculated by the formula: ( is the symbol for area) Ways of calculating the area inside of a triangle are further discussed under Area. The centroid is constructed by drawing all the medians of the triangle. All three medians intersect at the same point: this crossing point is the centroid. Centroids are always inside a triangle. They are also the centre of gravity of the triangle. The three angle bisectors of the triangle intersect at a single point, called the incentre. Incentres are always inside the triangle. The three sides are equidistant from the incentre. The incentre is also the centre of the inscribed circle (incircle) of a triangle, or the interior circle which touches all three sides of the triangle. The circumcentre is the intersection of all three perpendicular bisectors. Unlike the incentre, it is outside the triangle if the triangle is obtuse. Acute triangles always have circumcentres inside, while the circumcentre of a right triangle is the midpoint of the hypotenuse. The vertices of the triangle are equidistant from the circumcentre. 
The circumcentre is so called because it is the centre of the circumcircle, or the exterior circle which touches all three vertices of the triangle.

The orthocentre is the crossing point of the three altitudes. It is always inside acute triangles, outside obtuse triangles, and at the vertex of the right angle in a right-angled triangle.

Please note that the centres of an equilateral triangle are always the same point.

Right Triangles and Pythagorean Theorem

Right triangles are triangles in which one of the interior angles is 90°. A 90° angle is called a right angle. Right triangles are sometimes called right-angled triangles. The other two interior angles are complementary, i.e. their sum equals 90°. Right triangles have special properties which make it easier to conceptualize and calculate their parameters in many cases.

The side opposite the right angle is called the hypotenuse. The sides adjacent to the right angle are the legs. When using the Pythagorean Theorem, the hypotenuse or its length is often labeled with a lower case c. The legs (or their lengths) are often labeled a and b.

Either of the legs can be considered a base and the other leg would be considered the height (or altitude), because the right angle automatically makes them perpendicular. If the lengths of both the legs are known, then by setting one of these sides as the base (b) and the other as the height (h), the area of the right triangle is very easy to calculate using the formula A = ½ × b × h. This is intuitively logical because another congruent right triangle can be placed against it so that the hypotenuses are the same line segment, forming a rectangle with sides having length b and width h. The area of the rectangle is b × h, so either one of the congruent right triangles forming it has an area equal to half of that rectangle.

Right triangles can be neither equilateral, acute, nor obtuse triangles. Isosceles right triangles have two 45° angles as well as the 90° angle. All isosceles right triangles are similar, since corresponding angles in isosceles right triangles are equal. If another triangle can be divided into two right triangles (see Triangle), then the area of the triangle may be able to be determined from the sum of the two constituent right triangles. Also, the Pythagorean Theorem can be generalized to non-right triangles by the Law of Cosines: a² + b² - 2ab × cos(C) = c², where C is the angle opposite side c.

For history regarding the Pythagorean Theorem, see Pythagorean theorem. The Pythagorean Theorem states that:
- In a right triangle, the square of the length of the hypotenuse is equal to the sum of the squares of the lengths of the other two sides.

Let's take a right triangle as shown here and set c equal to the length of the hypotenuse and set a and b each equal to the lengths of the other two sides. Then the Pythagorean Theorem can be stated as this equation: a² + b² = c². Using the Pythagorean Theorem, if the lengths of any two of the sides of a right triangle are known and it is known which side is the hypotenuse, then the length of the third side can be determined from the formula.

Sine, Cosine, and Tangent for Right Triangles

Sine, Cosine, and Tangent are all functions of an angle, which are useful in right triangle calculations. For an angle designated as θ, the sine function is abbreviated as sin θ, the cosine function is abbreviated as cos θ, and the tangent function is abbreviated as tan θ. For any angle θ, sin θ, cos θ, and tan θ are each single determined values, and if θ is a known value, sin θ, cos θ, and tan θ can be looked up in a table or found with a calculator.
There is a table listing these function values at the end of this section. For an angle between listed values, the sine, cosine, or tangent of that angle can be estimated from the values in the table. Conversely, if a number is known to be the sine, cosine, or tangent of an angle, then such tables could be used in reverse to find (or estimate) the value of a corresponding angle.

These three functions are related to right triangles in the following ways. In a right triangle:
- the sine of a non-right angle equals the length of the leg opposite that angle divided by the length of the hypotenuse.
- the cosine of a non-right angle equals the length of the leg adjacent to it divided by the length of the hypotenuse.
- the tangent of a non-right angle equals the length of the leg opposite that angle divided by the length of the leg adjacent to it.

For any value of θ where cos θ ≠ 0, tan θ = sin θ / cos θ.

If one considers the diagram representing a right triangle with the two non-right angles θ1 and θ2, and the side lengths a, b, c as shown there, then the functions of angle θ1 are found by applying the definitions above with the leg opposite θ1 and the leg adjacent to θ1; analogously for the functions of angle θ2.

Table of sine, cosine, and tangent for angles θ from 0 to 90° (columns: θ in degrees, θ in radians, sin θ, cos θ, tan θ). General rules for the important angles:

| θ in degrees | θ in radians | sin θ | cos θ | tan θ     |
| 0°           | 0            | 0     | 1     | 0         |
| 30°          | π/6          | 1/2   | √3/2  | √3/3      |
| 45°          | π/4          | √2/2  | √2/2  | 1         |
| 60°          | π/3          | √3/2  | 1/2   | √3        |
| 90°          | π/2          | 1     | 0     | undefined |

Polyominoes are shapes made from connecting unit squares together, though certain connections are not allowed. A domino is the shape made from attaching unit squares so that they share one full edge. The term polyomino is based on the word domino. There is only one possible domino. A polyomino made from three squares is called a tromino. A polyomino made from four squares is called a tetromino; there are five possible combinations and two reflections. A polyomino made from five squares is called a pentomino; there are twelve possible pentominoes, excluding mirror images and rotations.

Ellipses are sometimes called ovals. Ellipses contain two foci. The sum of the distances from a point on the ellipse to one focus and from that same point to the other focus is constant.

Area Shapes Extended into 3rd Dimension

Area Shapes Extended into 3rd Dimension Linearly to a Line or Point

Ellipsoids and Spheres

Suppose you are an astronomer in America. You observe an exciting event (say, a supernova) in the sky and would like to tell your colleagues in Europe about it. Suppose the supernova appeared at your zenith. You can't tell astronomers in Europe to look at their zenith because their zenith points in a different direction. You might tell them which constellation to look in. This might not work, though, because it might be too hard to find the supernova by searching an entire constellation. The best solution would be to give them an exact position by using a coordinate system.

On Earth, you can specify a location using latitude and longitude. This system works by measuring the angles separating the location from two great circles on Earth (namely, the equator and the prime meridian). Coordinate systems in the sky work in the same way. The equatorial coordinate system is the most commonly used. The equatorial system defines two coordinates: right ascension and declination, based on the axis of the Earth's rotation. The declination is the angle of an object north or south of the celestial equator. Declination on the celestial sphere corresponds to latitude on the Earth.
The right ascension of an object is defined by the position of a point on the celestial sphere called the vernal equinox. The further an object is east of the vernal equinox, the greater its right ascension.

A coordinate system is a system designed to establish positions with respect to given reference points. The coordinate system consists of one or more reference points, the styles of measurement (linear measurement or angular measurement) from those reference points, and the directions (or axes) in which those measurements will be taken. In astronomy, various coordinate systems are used to precisely define the locations of astronomical objects.

Latitude and longitude are used to locate a certain position on the Earth's surface. The lines of latitude (horizontal) and the lines of longitude (vertical) make up an invisible grid over the Earth. Lines of latitude are called parallels. Lines of longitude are not parallel to one another (they all run from the exact point of the north pole to the exact point of the south pole) and are called meridians. 0 degrees latitude circles the Earth's middle and is called the equator. Choosing 0 degrees longitude was trickier because there is no natural vertical "middle" of the Earth. It was finally agreed that the observatory in Greenwich, U.K. would mark 0 degrees longitude, due to its significant role in scientific discoveries and in establishing latitude and longitude. 0 degrees longitude is called the prime meridian.

Latitude and longitude are measured in degrees. One degree of latitude is about 69 miles. There are sixty minutes (') in a degree and sixty seconds (") in a minute. These tiny units make GPS (Global Positioning System) positions much more exact.

There are a few main lines of latitude: the Arctic Circle, the Antarctic Circle, the Tropic of Cancer, and the Tropic of Capricorn. The Antarctic Circle is 66.5 degrees south of the equator and it separates the temperate zone from the Antarctic zone. The Arctic Circle is its exact mirror in the north. The Tropic of Cancer separates the tropics from the temperate zone. It is 23.5 degrees north of the equator. It is mirrored in the south by the Tropic of Capricorn.

Horizontal coordinate system

One of the simplest ways of placing a star on the night sky is the coordinate system based on altitude and azimuth, thus called the Alt-Az or horizontal coordinate system. The reference circles for this system are the horizon and the celestial meridian, both of which may be most easily graphed for a given location using the celestial sphere. In simplest terms, the altitude is the angle made from the position of the celestial object (e.g. a star) to the point nearest it on the horizon. The azimuth is the angle from the northernmost point of the horizon (which is also its intersection with the celestial meridian) to the point on the horizon nearest the celestial object. Usually azimuth is measured eastwards from due north, so east has az = 90°, south has az = 180°, west has az = 270° and north has az = 360° (or 0°). An object's altitude and azimuth change as the earth rotates.

Equatorial coordinate system

The equatorial coordinate system is another system that uses two angles to place an object on the sky: right ascension and declination.

Ecliptic coordinate system

The ecliptic coordinate system is based on the ecliptic plane, i.e., the plane which contains our Sun and Earth's average orbit around it, which is tilted at 23°26' from the plane of Earth's equator.
The great circle at which this plane intersects the celestial sphere is the ecliptic, and one of the coordinates used in the ecliptic coordinate system, the ecliptic latitude, describes how far an object is to ecliptic north or to ecliptic south of this circle. On this circle lies the point of the vernal equinox (also called the first point of Aries); ecliptic longitude is measured as the angle of an object relative to this point, towards ecliptic east. Ecliptic latitude is generally indicated by φ, whereas ecliptic longitude is usually indicated by λ.

Galactic coordinate system

As a member of the Milky Way Galaxy, we have a clear view of the Milky Way from Earth. Since we are inside the Milky Way, we don't see the galaxy's spiral arms, central bulge and so forth directly as we do for other galaxies. Instead, the Milky Way completely encircles us. We see the Milky Way as a band of faint starlight forming a ring around us on the celestial sphere. The disk of the galaxy forms this ring, and the bulge forms a bright patch in the ring. You can easily see the Milky Way's faint band from a dark, rural location.

Our galaxy defines another useful coordinate system, the galactic coordinate system. This system works just like the others we've discussed. It also uses two coordinates to specify the position of an object on the celestial sphere. The galactic coordinate system first defines a galactic latitude, the angle an object makes with the galactic equator. The galactic equator has been selected to run through the center of the Milky Way's band. The second coordinate is galactic longitude, which is the angular separation of the object from the galaxy's "prime meridian," the great circle that passes through the Galactic center and the galactic poles. The galactic coordinate system is useful for describing an object's position with respect to the galaxy's center. For example, if an object has high galactic latitude, you might expect it to be less obstructed by interstellar dust.

Transformations between coordinate systems

One can use the principles of spherical trigonometry as applied to triangles on the celestial sphere to derive formulas for transforming coordinates in one system to those in another. These formulas generally rely on the spherical law of cosines, known also as the cosine rule for sides. By substituting various angles on the celestial sphere for the angles in the law of cosines and by thereafter applying basic trigonometric identities, most of the formulas necessary for coordinate transformations can be found. For a spherical triangle with sides (arcs) a, b, c and angle A opposite side a, the law of cosines is stated thus:

cos a = cos b cos c + sin b sin c cos A

To transform from horizontal to equatorial coordinates, the relevant formulas involve RA, the right ascension; Dec, the declination; LST, the local sidereal time; Alt, the altitude; Az, the azimuth; and Lat, the observer's latitude. Using the same symbols and formulas, one can also derive formulas to transform from equatorial back to horizontal coordinates. Transformation from equatorial to ecliptic coordinate systems can similarly be accomplished using formulas that involve RA, the right ascension; Dec, the declination; φ, the ecliptic latitude; λ, the ecliptic longitude; and ε, the tilt of Earth's axis relative to the ecliptic plane. The standard relations are sketched below.
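The sketch below uses the symbols just defined and assumes azimuth measured eastwards from due north (as in the horizontal system described earlier), with the hour angle HA = LST − RA; the sign and quadrant conventions should be treated as assumptions to be checked against a standard reference rather than as definitions given by this text.

```latex
\begin{align*}
% Horizontal -> equatorial (HA is the hour angle, HA = LST - RA):
\sin(\mathrm{Dec}) &= \sin(\mathrm{Alt})\sin(\mathrm{Lat}) + \cos(\mathrm{Alt})\cos(\mathrm{Lat})\cos(\mathrm{Az}) \\
\cos(\mathrm{HA})  &= \frac{\sin(\mathrm{Alt}) - \sin(\mathrm{Lat})\sin(\mathrm{Dec})}{\cos(\mathrm{Lat})\cos(\mathrm{Dec})},
  \qquad \mathrm{RA} = \mathrm{LST} - \mathrm{HA} \\[6pt]
% Equatorial -> horizontal:
\sin(\mathrm{Alt}) &= \sin(\mathrm{Dec})\sin(\mathrm{Lat}) + \cos(\mathrm{Dec})\cos(\mathrm{Lat})\cos(\mathrm{HA}) \\
\cos(\mathrm{Az})  &= \frac{\sin(\mathrm{Dec}) - \sin(\mathrm{Lat})\sin(\mathrm{Alt})}{\cos(\mathrm{Lat})\cos(\mathrm{Alt})} \\[6pt]
% Equatorial -> ecliptic (phi = ecliptic latitude, lambda = ecliptic longitude):
\sin\varphi &= \sin(\mathrm{Dec})\cos\varepsilon - \cos(\mathrm{Dec})\sin\varepsilon\,\sin(\mathrm{RA}) \\
\tan\lambda &= \frac{\sin(\mathrm{RA})\cos\varepsilon + \tan(\mathrm{Dec})\sin\varepsilon}{\cos(\mathrm{RA})}
\end{align*}
```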
Again, using the same symbols, formulas for transforming ecliptic coordinates back to equatorial coordinates can be found in the same way.

A topological space is a set X, and a collection of subsets of X, C, such that both the empty set and X are contained in C, and the union of any subcollection of sets in C and the intersection of any finite subcollection of sets in C are also contained within C. The sets in C are called open sets. Their complements relative to X are called closed sets. Given two topological spaces, X and Y, a map f from X to Y is continuous if for every open set U of Y, f−1(U) is an open set of X.

Hyperbolic and Elliptic Geometry

There are precisely three different classes of three-dimensional constant-curvature geometry: Euclidean, hyperbolic and elliptic geometry. The three geometries are all built on the same first four axioms, but each has a unique version of the fifth axiom, also known as the parallel postulate. The 1868 Essay on an Interpretation of Non-Euclidean Geometry by Eugenio Beltrami (1835 - 1900) proved the logical consistency of the two non-Euclidean geometries, hyperbolic and elliptic.

The Parallel Postulate

The parallel postulate is as follows for the corresponding geometries.

Euclidean geometry: Playfair's version: "Given a line l and a point P not on l, there exists a unique line m through P that is parallel to l." Euclid's version: "Suppose that a line l meets two other lines m and n so that the sum of the interior angles on one side of l is less than 180°. Then m and n intersect in a point on that side of l." These two versions are equivalent; though Playfair's may be easier to conceive, Euclid's is often useful for proofs.

Hyperbolic geometry: Given an arbitrary infinite line l and any point P not on l, there exist two or more distinct lines which pass through P and are parallel to l.

Elliptic geometry: Given an arbitrary infinite line l and any point P not on l, there does not exist a line which passes through P and is parallel to l.

Hyperbolic geometry is also known as saddle geometry or Lobachevskian geometry. It differs in many ways from Euclidean geometry, often leading to quite counter-intuitive results. Some of these remarkable consequences of this geometry's unique fifth postulate include:

1. The sum of the three interior angles in a triangle is strictly less than 180°. Moreover, the angle sums of two distinct triangles are not necessarily the same.
2. Two triangles with the same interior angles have the same area.

Models of Hyperbolic Space

The following are four of the most common models used to describe hyperbolic space.

1. The Poincaré Disc Model. Also known as the conformal disc model. In it, the hyperbolic plane is represented by the interior of a circle, and lines are represented by arcs of circles that are orthogonal to the boundary circle and by diameters of the boundary circle. It preserves hyperbolic angles.
2. The Klein Model. Also known as the Beltrami-Klein model or projective disc model. In it, the hyperbolic plane is represented by the interior of a circle, and lines are represented by chords of the circle. This model gives a misleading visual representation of the magnitude of angles.
3. The Poincaré Half-Plane Model. The hyperbolic plane is represented by one-half of the Euclidean plane, as defined by a given Euclidean line l, where l is not considered part of the hyperbolic space. Lines are represented by half-circles orthogonal to l or rays perpendicular to l. It preserves hyperbolic angles.
4. The Lorentz Model.
Also described via spheres in Lorentzian four-space: the hyperbolic plane is represented by a two-dimensional hyperboloid of revolution embedded in three-dimensional Minkowski space.

Based on this geometry's definition of the fifth axiom, what does parallel mean? The following definitions are made for this geometry. If a line l and a line m do not intersect in the hyperbolic plane, but intersect at the plane's boundary at infinity, then l and m are said to be parallel. If a line p and a line q neither intersect in the hyperbolic plane nor at the boundary at infinity, then p and q are said to be ultraparallel.

The Ultraparallel Theorem

For any two lines m and n in the hyperbolic plane such that m and n are ultraparallel, there exists a unique line l that is perpendicular to both m and n.

Elliptic geometry differs in many ways from Euclidean geometry, often leading to quite counter-intuitive results. For example, directly from this geometry's fifth axiom we have that there exist no parallel lines. Some of the other remarkable consequences of the parallel postulate include: The sum of the three interior angles in a triangle is strictly greater than 180°.

Models of Elliptic Space

Spherical geometry gives us perhaps the simplest model of elliptic geometry. Points are represented by points on the sphere. Lines are represented by great circles on the sphere.

- Euclid's First Four Postulates
- Euclid's Fifth Postulate
- Incidence Geometry
- Projective and Affine Planes (necessary?)
- Axioms of Betweenness
- Pasch and Crossbar
- Axioms of Congruence
- Continuity (necessary?)
- Hilbert Planes
- Neutral Geometry
- Modern geometry
- An Alternative Way and Alternative Geometric Means of Calculating the Area of a Circle
http://en.m.wikibooks.org/wiki/Geometry/Print_version
13
21
It was developed by Charles Spearman in the early 1900s, and as such this test is also called Spearman's rank correlation coefficient. In statistical analysis, situations arise when the data are not available in numerical form for correlation analysis, but the information is sufficient to rank the data as first, second, third and so on; in this kind of situation the rank correlation method is often used to work out the coefficient of rank correlation. These topics are all covered in the Statistics Homework Help and Assignment Help at transtutors.com.

The rank correlation coefficient is in fact a measure of association that is based on the ranks of the observations and not on the numerical values of the data. To calculate the rank correlation coefficient, the actual observations are first replaced by their ranks: the highest value is given rank 1, the next highest value rank 2, and, following this order, ranks are assigned to all the values. If two or more values are equal, the average of the ranks that would have been assigned had all of them been different is calculated, and that same rank (equal to the calculated average) is given to each of the tied values. The next step is to record the difference between the ranks for each pair of observations, square these differences, and total them. Finally the rank correlation coefficient is worked out as

ρ = 1 − (6 Σd²) / (n³ − n)

Here n denotes the number of paired observations and d is the difference between the two ranks of each pair. The value of Spearman's rank correlation coefficient always lies between −1 and +1. Spearman's rank correlation is also known as "grade correlation". Basically it is a non-parametric measure of statistical dependence between two variables. This test assesses how well the relationship between two variables can be described using a monotonic function.

Steps involved in Spearman's rank correlation test: The null hypothesis is "There is no relationship between the two sets of data." Rank both sets of data from highest to lowest and check for tied ranks. Subtract the two sets of ranks to get the difference d and square the values of d. Add the squared values of d to get Σd². The next step is to use the formula ρ = 1 − (6Σd²)/(n³ − n), where n is the number of ranks. If the ρ value is −1, there is a perfect negative correlation; if it falls between −1 and −0.5, there is a strong negative correlation; if it falls between −0.5 and 0, there is a weak negative correlation; if it is 0, there is no correlation; if it falls between 0 and 0.5, there is a weak positive correlation; if it falls between 0.5 and 1, there is a strong positive correlation; and if it is 1, there is a perfect positive correlation between the two data sets. The null hypothesis is accepted if the ρ value is 0; otherwise it is rejected. Whenever the objective is to know whether two variables are related to each other, the correlation technique is used.

Our email-based homework help support provides clear insight and explanations that help make the subject practical and relevant for any assignment. Transtutors.com provides timely homework help at reasonable charges with detailed answers to your Statistics questions so that you get to understand your assignments or homework better, apart from having the answers.
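As a small illustration of the steps just described, here is a minimal Python sketch that ranks two data sets (using average ranks for ties) and applies the classic formula. The two "judge" score lists are hypothetical example data, not taken from the text, and the d² formula is exact only when there are no tied ranks.

```python
def rank(values):
    """Assign ranks (1 = largest value), giving tied values the average of their ranks."""
    order = sorted(range(len(values)), key=lambda i: values[i], reverse=True)
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # Find the run of tied values starting at position i.
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg_rank = (i + j) / 2.0 + 1.0        # average of 1-based positions i..j
        for k in range(i, j + 1):
            ranks[order[k]] = avg_rank
        i = j + 1
    return ranks

def spearman_rho(x, y):
    """Spearman's rank correlation via rho = 1 - 6*sum(d^2) / (n^3 - n)."""
    rx, ry = rank(x), rank(y)
    n = len(x)
    d_squared = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1.0 - 6.0 * d_squared / (n ** 3 - n)

# Hypothetical example data: scores given by two judges to the same five entries.
judge_a = [9, 7, 8, 5, 6]
judge_b = [8, 6, 9, 4, 5]
print(spearman_rho(judge_a, judge_b))   # 0.9, a strong positive correlation
```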
Our tutors are remarkably qualified and have years of experience providing Spearman Rank Correlation Test homework help or assignment help.
http://www.transtutors.com/homework-help/statistics/nonparametric-tests/spearman-rank-correlation/
13
15
Analysis of variance

Analysis of variance (ANOVA) is a collection of statistical models used to analyze the differences between group means and their associated procedures (such as "variation" among and between groups). In the ANOVA setting, the observed variance in a particular variable is partitioned into components attributable to different sources of variation. In its simplest form, ANOVA provides a statistical test of whether or not the means of several groups are equal, and therefore generalizes the t-test to more than two groups. Doing multiple two-sample t-tests would result in an increased chance of committing a type I error. For this reason, ANOVAs are useful in comparing (testing) three or more means (groups or variables) for statistical significance.

Background and terminology

ANOVA is a particular form of statistical hypothesis testing heavily used in the analysis of experimental data. A statistical hypothesis test is a method of making decisions using data. A test result (calculated from the null hypothesis and the sample) is called statistically significant if it is deemed unlikely to have occurred by chance, assuming the truth of the null hypothesis. A statistically significant result (when a probability (p-value) is less than a threshold (significance level)) justifies the rejection of the null hypothesis.

In the typical application of ANOVA, the null hypothesis is that all groups are simply random samples of the same population. This implies that all treatments have the same effect (perhaps none). Rejecting the null hypothesis implies that different treatments result in altered effects. By construction, hypothesis testing limits the rate of Type I errors (false positives leading to false scientific claims) to a significance level. Experimenters also wish to limit Type II errors (false negatives resulting in missed scientific discoveries). The Type II error rate is a function of several things including sample size (positively correlated with experiment cost), significance level (when the standard of proof is high, the chances of overlooking a discovery are also high) and effect size (when the effect is obvious to the casual observer, Type II error rates are low).

The terminology of ANOVA is largely from the statistical design of experiments. The experimenter adjusts factors and measures responses in an attempt to determine an effect. Factors are assigned to experimental units by a combination of randomization and blocking to ensure the validity of the results. Blinding keeps the weighing impartial. Responses show a variability that is partially the result of the effect and is partially random error.

ANOVA is the synthesis of several ideas and it is used for multiple purposes. As a consequence, it is difficult to define concisely or precisely. "Classical ANOVA for balanced data does three things at once:

- As exploratory data analysis, an ANOVA is an organization of an additive data decomposition, and its sums of squares indicate the variance of each component of the decomposition (or, equivalently, each set of terms of a linear model).
- Comparisons of mean squares, along with F-tests ... allow testing of a nested sequence of models.
- Closely related to the ANOVA is a linear model fit with coefficient estimates and standard errors."

In short, ANOVA is a statistical tool used in several ways to develop and confirm an explanation for the observed data.

- It is computationally elegant and relatively robust against violations of its assumptions.
- ANOVA provides industrial-strength (multiple sample comparison) statistical analysis.
- It has been adapted to the analysis of a variety of experimental designs.

As a result: ANOVA "has long enjoyed the status of being the most used (some would say abused) statistical technique in psychological research." ANOVA "is probably the most useful technique in the field of statistical inference." ANOVA is difficult to teach, particularly for complex experiments, with split-plot designs being notorious. In some cases the proper application of the method is best determined by problem pattern recognition followed by the consultation of a classic authoritative text.

(Condensed from the NIST Engineering Statistics handbook: Section 5.7. A Glossary of DOE Terminology.)

- Balanced design - An experimental design where all cells (i.e. treatment combinations) have the same number of observations.
- Blocking - A schedule for conducting treatment combinations in an experimental study such that any effects on the experimental results due to a known change in raw materials, operators, machines, etc., become concentrated in the levels of the blocking variable. The reason for blocking is to isolate a systematic effect and prevent it from obscuring the main effects. Blocking is achieved by restricting randomization.
- Design - A set of experimental runs which allows the fit of a particular model and the estimate of effects.
- DOE - Design of experiments. An approach to problem solving involving collection of data that will support valid, defensible, and supportable conclusions.
- Effect - How changing the settings of a factor changes the response. The effect of a single factor is also called a main effect.
- Error - Unexplained variation in a collection of observations. DOEs typically require understanding of both random error and lack-of-fit error.
- Experimental unit - The entity to which a specific treatment combination is applied.
- Factors - Process inputs an investigator manipulates to cause a change in the output.
- Lack-of-fit error - Error that occurs when the analysis omits one or more important terms or factors from the process model. Including replication in a DOE allows separation of experimental error into its components: lack of fit and random (pure) error.
- Model - Mathematical relationship which relates changes in a given response to changes in one or more factors.
- Random error - Error that occurs due to natural variation in the process. Random error is typically assumed to be normally distributed with zero mean and a constant variance. Random error is also called experimental error.
- Randomization - A schedule for allocating treatment material and for conducting treatment combinations in a DOE such that the conditions in one run neither depend on the conditions of the previous run nor predict the conditions in the subsequent runs.[nb 1]
- Replication - Performing the same treatment combination more than once. Including replication allows an estimate of the random error independent of any lack-of-fit error.
- Responses - The output(s) of a process. Sometimes called dependent variable(s).
- Treatment - A treatment is a specific combination of factor levels whose effect is to be compared with other treatments.

Classes of models

There are three classes of models used in the analysis of variance, and these are outlined here. The fixed-effects model of analysis of variance applies to situations in which the experimenter applies one or more treatments to the subjects of the experiment to see if the response variable values change.
This allows the experimenter to estimate the ranges of response variable values that the treatment would generate in the population as a whole.

Random-effects models are used when the treatments are not fixed. This occurs when the various factor levels are sampled from a larger population. Because the levels themselves are random variables, some assumptions and the method of contrasting the treatments (a multi-variable generalization of simple differences) differ from the fixed-effects model.

A mixed-effects model contains experimental factors of both fixed and random-effects types, with appropriately different interpretations and analysis for the two types.

Example: Teaching experiments could be performed by a university department to find a good introductory textbook, with each text considered a treatment. The fixed-effects model would compare a list of candidate texts. The random-effects model would determine whether important differences exist among a list of randomly selected texts. The mixed-effects model would compare the (fixed) incumbent texts to randomly selected alternatives.

Defining fixed and random effects has proven elusive, with competing definitions arguably leading toward a linguistic quagmire.

Assumptions of ANOVA

The analysis of variance has been studied from several approaches, the most common of which uses a linear model that relates the response to the treatments and blocks. Even when the statistical model is nonlinear, it can be approximated by a linear model for which an analysis of variance may be appropriate.

Textbook analysis using a normal distribution

- Independence of observations – this is an assumption of the model that simplifies the statistical analysis.
- Normality – the distributions of the residuals are normal.
- Equality (or "homogeneity") of variances, called homoscedasticity: the variance of data in groups should be the same.

The separate assumptions of the textbook model imply that the errors are independently, identically, and normally distributed for fixed-effects models, that is, that the errors (the ε's) are independent and drawn from a normal distribution with zero mean and constant variance, ε ~ N(0, σ²).

In a randomized controlled experiment, the treatments are randomly assigned to experimental units, following the experimental protocol. This randomization is objective and declared before the experiment is carried out. The objective random assignment is used to test the significance of the null hypothesis, following the ideas of C. S. Peirce and Ronald A. Fisher. This design-based analysis was discussed and developed by Francis J. Anscombe at Rothamsted Experimental Station and by Oscar Kempthorne at Iowa State University. Kempthorne and his students make an assumption of unit-treatment additivity, which is discussed in the books of Kempthorne and David R. Cox.

In its simplest form, the assumption of unit-treatment additivity[nb 2] states that the observed response y_{i,j} from experimental unit i when receiving treatment j can be written as the sum of the unit's response y_i and the treatment effect t_j, that is y_{i,j} = y_i + t_j. The assumption of unit-treatment additivity implies that, for every treatment j, the jth treatment has exactly the same effect t_j on every experimental unit. The assumption of unit-treatment additivity usually cannot be directly falsified, according to Cox and Kempthorne. However, many consequences of treatment-unit additivity can be falsified. For a randomized experiment, the assumption of unit-treatment additivity implies that the variance is constant for all treatments.
Therefore, by contraposition, a necessary condition for unit-treatment additivity is that the variance is constant. The use of unit-treatment additivity and randomization is similar to the design-based inference that is standard in finite-population survey sampling.

Derived linear model

Kempthorne uses the randomization-distribution and the assumption of unit-treatment additivity to produce a derived linear model, very similar to the textbook model discussed previously. The test statistics of this derived linear model are closely approximated by the test statistics of an appropriate normal linear model, according to approximation theorems and simulation studies. However, there are differences. For example, the randomization-based analysis results in a small but (strictly) negative correlation between the observations. In the randomization-based analysis, there is no assumption of a normal distribution and certainly no assumption of independence. On the contrary, the observations are dependent!

The randomization-based analysis has the disadvantage that its exposition involves tedious algebra and extensive time. Since the randomization-based analysis is complicated and is closely approximated by the approach using a normal linear model, most teachers emphasize the normal linear model approach. Few statisticians object to model-based analysis of balanced randomized experiments.

Statistical models for observational data

However, when applied to data from non-randomized experiments or observational studies, model-based analysis lacks the warrant of randomization. For observational data, the derivation of confidence intervals must use subjective models, as emphasized by Ronald A. Fisher and his followers. In practice, the estimates of treatment effects from observational studies are often inconsistent. In practice, "statistical models" and observational data are useful for suggesting hypotheses that should be treated very cautiously by the public.

Summary of assumptions

The normal-model based ANOVA analysis assumes the independence, normality and homogeneity of the variances of the residuals. The randomization-based analysis assumes only the homogeneity of the variances of the residuals (as a consequence of unit-treatment additivity) and uses the randomization procedure of the experiment. Both these analyses require homoscedasticity, as an assumption for the normal-model analysis and as a consequence of randomization and additivity for the randomization-based analysis. However, studies of processes that change variances rather than means (called dispersion effects) have been successfully conducted using ANOVA. There are no necessary assumptions for ANOVA in its full generality, but the F-test used for ANOVA hypothesis testing has assumptions and practical limitations which are of continuing interest.

Problems which do not satisfy the assumptions of ANOVA can often be transformed to satisfy the assumptions. The property of unit-treatment additivity is not invariant under a "change of scale", so statisticians often use transformations to achieve unit-treatment additivity. If the response variable is expected to follow a parametric family of probability distributions, then the statistician may specify (in the protocol for the experiment or observational study) that the responses be transformed to stabilize the variance. Also, a statistician may specify that logarithmic transforms be applied to the responses, which are believed to follow a multiplicative model.
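As a small illustrative sketch (generic symbols, not notation taken from this article): if responses follow a multiplicative model, taking logarithms turns it into an additive one, which is why a log transform can restore unit-treatment additivity:

```latex
y_{ij} = \mu \,\tau_j\, \varepsilon_{ij}
\quad\Longrightarrow\quad
\log y_{ij} = \log\mu + \log\tau_j + \log\varepsilon_{ij}
```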
According to Cauchy's functional equation theorem, the logarithm is the only continuous transformation that transforms real multiplication to addition.

Characteristics of ANOVA

ANOVA is used in the analysis of comparative experiments, those in which only the difference in outcomes is of interest. The statistical significance of the experiment is determined by a ratio of two variances. This ratio is independent of several possible alterations to the experimental observations: adding a constant to all observations does not alter significance, and multiplying all observations by a constant does not alter significance. So ANOVA statistical significance results are independent of constant bias and scaling errors as well as the units used in expressing observations. In the era of mechanical calculation it was common to subtract a constant from all observations (when equivalent to dropping leading digits) to simplify data entry. This is an example of data coding.

Logic of ANOVA

The calculations of ANOVA can be characterized as computing a number of means and variances, dividing two variances and comparing the ratio to a handbook value to determine statistical significance. Calculating a treatment effect is then trivial: "the effect of any treatment is estimated by taking the difference between the mean of the observations which receive the treatment and the general mean."

Partitioning of the sum of squares

ANOVA uses traditional standardized terminology. The definitional equation of sample variance is s² = Σ(y_i − ȳ)² / (n − 1), where the divisor is called the degrees of freedom (DF), the summation is called the sum of squares (SS), the result is called the mean square (MS) and the squared terms are deviations from the sample mean. ANOVA estimates 3 sample variances: a total variance based on all the observation deviations from the grand mean, an error variance based on all the observation deviations from their appropriate treatment means, and a treatment variance. The treatment variance is based on the deviations of treatment means from the grand mean, the result being multiplied by the number of observations in each treatment to account for the difference between the variance of observations and the variance of means.

If the null hypothesis is true, all three variance estimates are equal (within sampling error). The fundamental technique is a partitioning of the total sum of squares SS into components related to the effects used in the model. For example, for a simplified ANOVA with one type of treatment at different levels, SS_Total = SS_Error + SS_Treatments. The number of degrees of freedom DF can be partitioned in a similar way: one of these components (that for error) specifies a chi-squared distribution which describes the associated sum of squares, while the same is true for "treatments" if there is no treatment effect. See also Lack-of-fit sum of squares.

The F-test is used for comparing the factors of the total deviation. For example, in one-way, or single-factor ANOVA, statistical significance is tested for by comparing the F test statistic F = MS_Treatments / MS_Error, where MS is mean square, I is the number of treatments and n_T is the total number of cases, to the F-distribution with I − 1 and n_T − I degrees of freedom. Using the F-distribution is a natural candidate because the test statistic is the ratio of two scaled sums of squares each of which follows a scaled chi-squared distribution. The expected value of F is 1 + n·σ²_Treatment / σ²_Error (where n is the treatment sample size), which is 1 for no treatment effect. As values of F increase above 1 the evidence is increasingly inconsistent with the null hypothesis.
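To make the partition and the F ratio concrete, here is a minimal Python sketch of a one-way (single-factor) ANOVA computed by hand. The three groups are hypothetical example data; in practice the resulting F would be compared with the critical value of the F-distribution for the stated degrees of freedom.

```python
# Minimal one-way ANOVA by hand, following the sum-of-squares partition above.
# The three groups are hypothetical example data.
groups = [
    [6.1, 5.8, 6.4, 6.0],   # treatment 1
    [7.2, 6.9, 7.5, 7.0],   # treatment 2
    [5.5, 5.9, 5.4, 5.6],   # treatment 3
]

all_obs = [y for g in groups for y in g]
n_total = len(all_obs)
n_treatments = len(groups)
grand_mean = sum(all_obs) / n_total

# Treatment (between-group) sum of squares: deviations of group means from the
# grand mean, weighted by group size.
ss_treat = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)

# Error (within-group) sum of squares: deviations of observations from their group mean.
ss_error = sum(sum((y - sum(g) / len(g)) ** 2 for y in g) for g in groups)

df_treat = n_treatments - 1
df_error = n_total - n_treatments

ms_treat = ss_treat / df_treat
ms_error = ss_error / df_error
F = ms_treat / ms_error

# Compare F to the F-distribution with (df_treat, df_error) degrees of freedom.
print(F, df_treat, df_error)
```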
Two apparent experimental methods of increasing F are increasing the sample size and reducing the error variance by tight experimental controls. The textbook method of concluding the hypothesis test is to compare the observed value of F with the critical value of F determined from tables. The critical value of F is a function of the numerator degrees of freedom, the denominator degrees of freedom and the significance level (α). If F ≥ FCritical (Numerator DF, Denominator DF, α) then reject the null hypothesis. The computer method calculates the probability (p-value) of a value of F greater than or equal to the observed value. The null hypothesis is rejected if this probability is less than or equal to the significance level (α). The two methods produce the same result. The ANOVA F-test is known to be nearly optimal in the sense of minimizing false negative errors for a fixed rate of false positive errors (maximizing power for a fixed significance level). To test the hypothesis that all treatments have exactly the same effect, the F-test's p-values closely approximate the permutation test's p-values: The approximation is particularly close when the design is balanced. Such permutation tests characterize tests with maximum power against all alternative hypotheses, as observed by Rosenbaum.[nb 3] The ANOVA F–test (of the null-hypothesis that all treatments have exactly the same effect) is recommended as a practical test, because of its robustness against many alternative distributions.[nb 4] ANOVA consists of separable parts; partitioning sources of variance and hypothesis testing can be used individually. ANOVA is used to support other statistical tools. Regression is first used to fit more complex models to data, then ANOVA is used to compare models with the objective of selecting simple(r) models that adequately describe the data. "Such models could be fit without any reference to ANOVA, but ANOVA tools could then be used to make some sense of the fitted models, and to test hypotheses about batches of coefficients." "[W]e think of the analysis of variance as a way of understanding and structuring multilevel models—not as an alternative to regression but as a tool for summarizing complex high-dimensional inferences ..." ANOVA for a single factor The simplest experiment suitable for ANOVA analysis is the completely randomized experiment with a single factor. More complex experiments with a single factor involve constraints on randomization and include completely randomized blocks and Latin squares (and variants: Graeco-Latin squares, etc.). The more complex experiments share many of the complexities of multiple factors. A relatively complete discussion of the analysis (models, data summaries, ANOVA table) of the completely randomized experiment is available. ANOVA for multiple factors ANOVA generalizes to the study of the effects of multiple factors. When the experiment includes observations at all combinations of levels of each factor, it is termed factorial. Factorial experiments are more efficient than a series of single factor experiments and the efficiency grows as the number of factors increases. Consequently, factorial designs are heavily used. The use of ANOVA to study the effects of multiple factors has a complication. In a 3-way ANOVA with factors x, y and z, the ANOVA model includes terms for the main effects (x, y, z) and terms for interactions (xy, xz, yz, xyz). All terms require hypothesis tests. 
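For concreteness, here is a sketch of a standard two-factor model with interaction, in generic textbook notation rather than notation taken from this article; a three-factor model adds the remaining two-way terms and the three-way term in the same fashion:

```latex
y_{ijk} = \mu + \alpha_i + \beta_j + (\alpha\beta)_{ij} + \varepsilon_{ijk}
```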
The proliferation of interaction terms increases the risk that some hypothesis test will produce a false positive by chance. Fortunately, experience says that high order interactions are rare. The ability to detect interactions is a major advantage of multiple factor ANOVA. Testing one factor at a time hides interactions, but produces apparently inconsistent experimental results. Caution is advised when encountering interactions; test interaction terms first and expand the analysis beyond ANOVA if interactions are found. Texts vary in their recommendations regarding the continuation of the ANOVA procedure after encountering an interaction. Interactions complicate the interpretation of experimental data. Neither the calculations of significance nor the estimated treatment effects can be taken at face value. "A significant interaction will often mask the significance of main effects." Graphical methods are recommended to enhance understanding. Regression is often useful. A lengthy discussion of interactions is available in Cox (1958). Some interactions can be removed (by transformations) while others cannot.

A variety of techniques are used with multiple factor ANOVA to reduce expense. One technique used in factorial designs is to minimize replication (possibly no replication with support of analytical trickery) and to combine groups when effects are found to be statistically (or practically) insignificant. An experiment with many insignificant factors may collapse into one with a few factors supported by many replications.

Worked numeric examples

Some analysis is required in support of the design of the experiment while other analysis is performed after changes in the factors are formally found to produce statistically significant changes in the responses. Because experimentation is iterative, the results of one experiment alter plans for following experiments.

The number of experimental units

In the design of an experiment, the number of experimental units is planned to satisfy the goals of the experiment. Experimentation is often sequential. Early experiments are often designed to provide mean-unbiased estimates of treatment effects and of experimental error. Later experiments are often designed to test a hypothesis that a treatment effect has an important magnitude; in this case, the number of experimental units is chosen so that the experiment is within budget and has adequate power, among other goals. Reporting sample size analysis is generally required in psychology. "Provide information on sample size and the process that led to sample size decisions." The analysis, which is written in the experimental protocol before the experiment is conducted, is examined in grant applications and administrative review boards.

Besides the power analysis, there are less formal methods for selecting the number of experimental units. These include graphical methods based on limiting the probability of false negative errors, graphical methods based on an expected variation increase (above the residuals) and methods based on achieving a desired confidence interval. Power analysis is often applied in the context of ANOVA in order to assess the probability of successfully rejecting the null hypothesis if we assume a certain ANOVA design, effect size in the population, sample size and significance level. Power analysis can assist in study design by determining what sample size would be required in order to have a reasonable chance of rejecting the null hypothesis when the alternative hypothesis is true.
Several standardized measures of effect gauge the strength of the association between a predictor (or set of predictors) and the dependent variable. Effect-size estimates facilitate the comparison of findings in studies and across disciplines. A non-standardized measure of effect size with meaningful units may be preferred for reporting purposes.

η² (eta-squared): Eta-squared describes the ratio of variance explained in the dependent variable by a predictor while controlling for other predictors. Eta-squared is a biased estimator of the variance explained by the model in the population (it estimates only the effect size in the sample). On average it overestimates the variance explained in the population. As the sample size gets larger the amount of bias gets smaller. Cohen (1992) suggests effect sizes for various indexes, including ƒ (where 0.1 is a small effect, 0.25 is a medium effect and 0.4 is a large effect). He also offers a conversion table (see Cohen, 1988, p. 283) for eta squared (η²) where 0.0099 constitutes a small effect, 0.0588 a medium effect and 0.1379 a large effect.

It is always appropriate to carefully consider outliers. They have a disproportionate impact on statistical conclusions and are often the result of errors. It is prudent to verify that the assumptions of ANOVA have been met. Residuals are examined or analyzed to confirm homoscedasticity and gross normality. Residuals should have the appearance of (zero mean normal distribution) noise when plotted as a function of anything including time and modeled data values. Trends hint at interactions among factors or among observations. One rule of thumb: "If the largest standard deviation is less than twice the smallest standard deviation, we can use methods based on the assumption of equal standard deviations and our results will still be approximately correct."

A statistically significant effect in ANOVA is often followed up with one or more different follow-up tests. This can be done in order to assess which groups are different from which other groups or to test various other focused hypotheses. Follow-up tests are often distinguished in terms of whether they are planned (a priori) or post hoc. Planned tests are determined before looking at the data and post hoc tests are performed after looking at the data. Often one of the "treatments" is none, so the treatment group can act as a control. Dunnett's test (a modification of the t-test) tests whether each of the other treatment groups has the same mean as the control. Post hoc tests such as Tukey's range test most commonly compare every group mean with every other group mean and typically incorporate some method of controlling for Type I errors.

Comparisons, which are most commonly planned, can be either simple or compound. Simple comparisons compare one group mean with one other group mean. Compound comparisons typically compare two sets of group means where one set has two or more groups (e.g., compare average group means of groups A, B and C with group D). Comparisons can also look at tests of trend, such as linear and quadratic relationships, when the independent variable involves ordered levels. Following ANOVA with pair-wise multiple-comparison tests has been criticized on several grounds. There are many such tests (10 in one table) and recommendations regarding their use are vague or conflicting.

Study designs and ANOVAs

There are several types of ANOVA.
Many statisticians base ANOVA on the design of the experiment, especially on the protocol that specifies the random assignment of treatments to subjects; the protocol's description of the assignment mechanism should include a specification of the structure of the treatments and of any blocking. It is also common to apply ANOVA to observational data using an appropriate statistical model. Some popular designs use the following types of ANOVA:

- One-way ANOVA is used to test for differences among two or more independent groups (means), e.g., different levels of urea application in a crop. Typically, however, the one-way ANOVA is used to test for differences among at least three groups, since the two-group case can be covered by a t-test. When there are only two means to compare, the t-test and the ANOVA F-test are equivalent; the relation between ANOVA and t is given by F = t².
- Factorial ANOVA is used when the experimenter wants to study the interaction effects among the treatments.
- Repeated measures ANOVA is used when the same subjects are used for each treatment (e.g., in a longitudinal study).
- Multivariate analysis of variance (MANOVA) is used when there is more than one response variable.

Balanced experiments (those with an equal sample size for each treatment) are relatively easy to interpret; unbalanced experiments offer more complexity. For single factor (one way) ANOVA, the adjustment for unbalanced data is easy, but the unbalanced analysis lacks both robustness and power. For more complex designs the lack of balance leads to further complications. "The orthogonality property of main effects and interactions present in balanced data does not carry over to the unbalanced case. This means that the usual analysis of variance techniques do not apply. Consequently, the analysis of unbalanced factorials is much more difficult than that for balanced designs." In the general case, "The analysis of variance can also be applied to unbalanced data, but then the sums of squares, mean squares, and F-ratios will depend on the order in which the sources of variation are considered." The simplest techniques for handling unbalanced data restore balance by either throwing out data or by synthesizing missing data. More complex techniques use regression.

ANOVA is (in part) a significance test. The American Psychological Association holds the view that simply reporting significance is insufficient and that reporting confidence bounds is preferred. ANOVA is considered to be a special case of linear regression which in turn is a special case of the general linear model. All consider the observations to be the sum of a model (fit) and a residual (error) to be minimized.

While the analysis of variance reached fruition in the 20th century, antecedents extend centuries into the past according to Stigler. These include hypothesis testing, the partitioning of sums of squares, experimental techniques and the additive model. Laplace was performing hypothesis testing in the 1770s. The development of least-squares methods by Laplace and Gauss circa 1800 provided an improved method of combining observations (over the existing practices of astronomy and geodesy). It also initiated much study of the contributions to sums of squares. Laplace soon knew how to estimate a variance from a residual (rather than a total) sum of squares. By 1827 Laplace was using least squares methods to address ANOVA problems regarding measurements of atmospheric tides.
Before 1800 astronomers had isolated observational errors resulting from reaction times (the "personal equation") and had developed methods of reducing the errors. The experimental methods used in the study of the personal equation were later accepted by the emerging field of psychology which developed strong (full factorial) experimental methods to which randomization and blinding were soon added. An eloquent non-mathematical explanation of the additive effects model was available in 1885. Sir Ronald Fisher introduced the term "variance" and proposed a formal analysis of variance in a 1918 article The Correlation Between Relatives on the Supposition of Mendelian Inheritance. His first application of the analysis of variance was published in 1921. Analysis of variance became widely known after being included in Fisher's 1925 book Statistical Methods for Research Workers. One of the attributes of ANOVA which ensured its early popularity was computational elegance. The structure of the additive model allows solution for the additive coefficients by simple algebra rather than by matrix calculations. In the era of mechanical calculators this simplicity was critical. The determination of statistical significance also required access to tables of the F function which were supplied by early statistics texts. - Randomization is a term used in multiple ways in this material. "Randomization has three roles in applications: as a device for eliminating biases, for example from unobserved explanatory variables and selection effects: as a basis for estimating standard errors: and as a foundation for formally exact significance tests." Cox (2006, page 192) Hinkelmann and Kempthorne use randomization both in experimental design and for statistical analysis. - Unit-treatment additivity is simply termed additivity in most texts. Hinkelmann and Kempthorne add adjectives and distinguish between additivity in the strict and broad senses. This allows a detailed consideration of multiple error sources (treatment, state, selection, measurement and sampling) on page 161. - Rosenbaum (2002, page 40) cites Section 5.7 (Permutation Tests), Theorem 2.3 (actually Theorem 3, page 184) of Lehmann's Testing Statistical Hypotheses (1959). - The F-test for the comparison of variances has a mixed reputation. It is not recommended as a hypothesis test to determine whether two different samples have the same variance. It is recommended for ANOVA where two estimates of the variance of the same sample are compared. While the F-test is not generally robust against departures from normality, it has been found to be robust in the special case of ANOVA. Citations from Moore & McCabe (2003): "Analysis of variance uses F statistics, but these are not the same as the F statistic for comparing two population standard deviations." (page 554) "The F test and other procedures for inference about variances are so lacking in robustness as to be of little use in practice." (page 556) "[The ANOVA F test] is relatively insensitive to moderate nonnormality and unequal variances, especially when the sample sizes are similar." (page 763) ANOVA assumes homoscedasticity, but it is robust. The statistical test for homoscedasticity (the F-test) is not robust. Moore & McCabe recommend a rule of thumb. - Gelman (2005, p 2) - Howell (2002, p 320) - Montgomery (2001, p 63) - Gelman (2005, p 1) - Gelman (2005, p 5) - "Section 5.7. A Glossary of DOE Terminology". 
Unit 2 Idealism, Realism and Pragmatism in Education

By the end of this topic, you should be able to:
1. Explain the major world views of philosophy: idealism, realism, and pragmatism; and
2. Identify the contributions of these world views of philosophy, namely idealism, realism, and pragmatism, to the field of education.

Traditionally, philosophical methods have consisted of the analysis and clarification of concepts, arguments, theories, and language. Philosophers have analyzed theories and arguments, enhancing previous arguments and raising powerful objections that lead to the revision or abandonment of theories and lines of argument (Noddings, 1998). This topic will provide readers with some general knowledge of philosophies. Basically, there are three general or world philosophies: idealism, realism, and pragmatism.

Educators confront philosophical issues on a daily basis, often without recognizing them as such. In their daily practice, educators formulate goals, discuss values, and set priorities. Hence, educators who get involved in dealing with goals, values, and priorities soon realize that they are engaging in philosophical work. Philosophy is concerned primarily with identifying beliefs about human existence and evaluating the arguments that support those beliefs.

Develop a set of questions that may drive philosophical investigations.

7.1 IDEALISM

In Western culture, idealism is perhaps the oldest systematic philosophy, dating back at least to Plato in ancient Greece. Idealism is the philosophical theory that maintains that the ultimate nature of reality is based on mind or ideas. It holds that the so-called external or real world is inseparable from mind, consciousness, or perception. Idealism is any philosophy which argues that the only things knowable are consciousness or the contents of consciousness, not anything in the outside world, if such a place actually exists. Indeed, idealism often takes the form of arguing that the only real things are mental entities, not physical things, and that reality is somehow dependent upon the mind rather than independent of it. Some narrow versions of idealism argue that our understanding of reality reflects the workings of our mind first and foremost, and that the properties of objects have no standing independent of the minds perceiving them.

The nature and identity of the mind upon which reality is dependent is one issue that has divided idealists of various sorts. Some argue that there is some objective mind outside of nature; some argue that it is simply the common power of reason or rationality; some argue that it is the collective mental faculties of society; and some focus simply on the minds of individual human beings.

In short, the main tenet of idealism is that ideas and knowledge are the truest reality. Many things in the world change, but ideas and knowledge are enduring. Idealism has often been referred to as "idea-ism". Idealists believe that ideas can change lives. The most important part of a person is the mind, which is to be nourished and developed. To achieve a sufficient understanding of idealism, it is necessary to examine the works of selected outstanding philosophers usually associated with this philosophy.
Idealism comes in several flavors: (a) Platonic idealism - there exists a perfect realm of forms and ideas, and our world merely contains shadows of that realm; only ideas can be known or have any reality; (b) Religious idealism - this theory argues that all knowledge originates in perceived phenomena which have been organized by categories; and (c) Modern idealism - all objects are identical with some idea, and ideal knowledge is itself the system of ideas.

How does modern idealism compare with the idealism of earlier periods? Discuss.

7.1.1 Platonic Idealism

Plato was a Greek philosopher of the 4th century B.C.E., a student of Socrates and teacher of Aristotle. The Academy is an ancient school of philosophy founded by Plato; at the beginning, this school had a physical existence at a site just outside the walls of Athens. According to Platonic idealism, there exists a perfect realm of forms and ideas, and our world merely contains shadows of that realm.

Plato was a follower of Socrates, a truly innovative thinker of his time, who did not record his ideas but shared them orally through a question and answer approach. Plato presented his ideas in two works: The Republic and Laws. He believed in the importance of searching for truth because truth was perfect and eternal. He wrote about separating the world of ideas from the world of matter. Ideas are constant, but in the world of matter, information and ideas are constantly changing because of their sensory nature. Therefore Plato's idealism suggested moving from opinion to true knowledge in the form of critical discussions, or the dialectic, since at the end of the discussion the ideas or opinions begin to synthesize as they work closer to truth. Knowledge is a process of discovery that can be attained through skilful questioning. For example, a particular tree, with a branch or two missing, possibly alive, possibly dead, and with the initials of two lovers carved into its bark, is distinct from the abstract form of tree-ness. A tree is the ideal that each of us holds that allows us to identify the imperfect reflections of trees all around us.

Platonism is considered the predominant philosophy of mathematics in mathematics departments all over the world, particularly as regards the foundations of mathematics. One statement of this philosophy is the thesis that mathematics is not created but discovered. What is absent from this thesis is a clear distinction between mathematical and non-mathematical creation, which leaves the question open.

Plato believed in the importance of state involvement in education and in moving individuals from concrete to abstract thinking. He believed that individual differences exist and that outstanding people should be rewarded for their knowledge. With this thinking came the view that girls and boys should have equal opportunities for education. In Plato's utopian society there were three social classes with corresponding education: workers, military personnel, and rulers. He believed that the ruler or king would be a good person with much wisdom, because it was only ignorance that led to evil.

7.1.2 Religious Idealism: Augustine

Religion and idealism are closely attached. Judaism, the originator of Christianity, and Christianity itself were influenced by many of the Greek philosophers who held idealism strongly.
Saint Augustine of Hippo, a bishop, a confessor, a doctor of the church, and one of the great thinkers of the Catholic Church, discussed the universe as being divided into the City of God and the earthly city, the City of Man. This parallels Plato's scheme of the world of ideas and the world of matter. Religious thinkers believed that man did not create knowledge but discovered it. Augustine, like Plato, did not believe that one person could teach another; instead, learners must be led to understanding through skilful questioning. Religious idealists see individuals as creations of God who have souls and contain elements of godliness that need to be developed. Augustine connected the philosophy of the Platonists and Neo-Platonists with Christianity; for instance, he saw the World of Ideas as the City of God.

According to Ozmon and Craver (2008), today one can see the tremendous influence religious idealism has had on American education. Early Christians implemented the idea of systematic teaching, which was used consistently throughout new and established schools. Many Greek and Jewish ideas about the nature of humanity were taught. For centuries, the Christian church educated generations with idealist philosophy. In addition, idealism and the Judeo-Christian religion were unified in European culture by the Middle Ages and thereafter.

Augustine was also very influential in the history of education, where he introduced the theory of three different types of students and instructed teachers to adapt their teaching styles to each student's individual learning style. The three different kinds of students are: (a) the student who has been well educated by knowledgeable teachers; (b) the student who has had no education; and (c) the student who has had a poor education but believes himself to be well educated.

If a student has been well educated in a wide variety of subjects, the teacher must be careful not to repeat what the student has already learned, but to challenge the student with material which he does not yet know thoroughly. With the student who has had no education, the teacher must be patient, willing to repeat things until the student understands, and sympathetic. Perhaps the most difficult student, however, is the one with an inferior education who believes he understands something when he does not. Augustine stressed the importance of showing this type of student the difference between having words and having understanding, and of helping the student to remain humble in his acquisition of knowledge.

An additional fundamental idea which Augustine introduced is that teachers should respond positively to the questions they receive from their students, even if a student interrupts his teacher. Augustine also founded the controlled style of teaching. This teaching style ensures the student's full understanding of a concept because the teacher does not bombard the student with too much material; focuses on one topic at a time; helps students discover what they do not understand, rather than moving on too quickly; anticipates questions; and helps them learn to solve difficulties and find solutions to problems. In a nutshell, Augustine claimed there are two basic styles a teacher uses when speaking to students: (i) the mixed style, which includes complex and sometimes showy language to help students see the beautiful artistry of the subject they are studying; and (ii) the grand style, which is not quite as elegant as the mixed style but is exciting and heartfelt, with the purpose of igniting the same passion in the students' hearts.
Augustine balanced his teaching philosophy with the traditional Bible-based practice of strict discipline, and he agreed with using punishment as an incentive for children to learn. Augustine believed all people tend toward evil, and students must therefore be physically punished when they allow their evil desires to direct their actions.

Identify and explain the aims, content, and methods of education based on the educational philosophy of Aristotle.

7.1.3 Modern Idealism: René Descartes, Immanuel Kant, and Friedrich Hegel

By the beginning of the modern period in the fifteenth and sixteenth centuries, idealism had come to be largely identified with systematization and subjectivism. Some major features of modern idealism are: (a) belief that reality includes, in addition to the physical universe, that which transcends it, is superior to it, and is eternal; this ultimate reality is non-physical and is best characterized by the term mind; (b) physical realities draw their meaning from the transcendent realities to which they are related; (c) that which is distinctive of human nature is mind, and mind is more than the physical entity, brain; (d) human life has a predetermined purpose: to become more like the transcendent mind; (e) man's purpose is fulfilled by development of the intellect and is referred to as self-realization; (f) ultimate reality includes absolute values; (g) knowledge comes through the application of reason to sense experience, and in so far as the physical world reflects the transcendent world, we can determine the nature of the transcendent; and (h) learning is a personal process of developing the potential within; it is not conditioning or pouring in facts, but self-realization, a process of discovery.

Modern idealism was shaped by the writings and thoughts of René Descartes, Immanuel Kant, and Georg Wilhelm Friedrich Hegel.

(i) René Descartes

Descartes, a French philosopher, was born in the town of La Haye in Touraine. In 1614 he studied civil and canon law, and he later devoted himself to mathematics. In 1637 he published his Geometry, in which his combination of algebra and geometry gave birth to analytical geometry, known as Cartesian geometry. But Descartes's most important contributions were his philosophical writings. Descartes was convinced that science and mathematics could be used to explain everything in nature, so he was the first to describe the physical universe in terms of matter and motion, seeing the universe as a giant mathematically designed engine.

Descartes wrote three important texts: Discourse on the Method of Rightly Conducting the Reason and Seeking Truth in the Sciences, Meditations on First Philosophy, and Principles of Philosophy. In his Discourse on the Method, he attempts to arrive at a fundamental set of principles that one can know as true without any doubt. To achieve this, he employs a method called metaphysical doubt, sometimes also referred to as methodological skepticism, in which he rejects any ideas that can be doubted and then re-establishes them in order to acquire a firm foundation for genuine knowledge. Initially, Descartes arrives at only a single principle: thought exists. "Thought cannot be separated from me, therefore I exist." Most famously, this is known as cogito ergo sum, meaning "I think, therefore I am."
Therefore, Descartes concluded, if he doubted, then something or someone must be doing the doubting; the very fact that he doubted proved his existence. Descartes decided that he could be certain that he exists because he thinks. He perceives his body through the use of the senses, but these have previously been shown to be unreliable. Hence, Descartes assumes that the only indubitable knowledge is that he is a thinking thing. Thinking is his essence, as it is the only thing about him that cannot be doubted. Descartes defines thought, or cogitatio, as "what happens in me such that I am immediately conscious of it, insofar as I am conscious of it." Thinking is thus every activity of a person of which he is immediately conscious.

(ii) Immanuel Kant

Immanuel Kant, one of the world's great philosophers, was born in the East Prussian city of Königsberg, studied at its schools and university, and worked there as a tutor and professor for more than forty years; he never traveled far from his home city. In writing his Critique of Pure Reason and Critique of Practical Reason, Kant tried to make sense of rationalism and empiricism within the idealist philosophy. In his system, individuals could have valid knowledge of human experience, established by the scientific laws of nature. The Critique of Pure Reason spells out the conditions for mathematical, scientific, and metaphysical knowledge in its Transcendental Aesthetic, Transcendental Analytic, and Transcendental Dialectic. Carefully distinguishing judgments as analytic or synthetic and as a priori or a posteriori, Kant held that the most interesting and useful varieties of human knowledge rely upon synthetic a priori judgments, which are, in turn, possible only when the mind determines the conditions of its own experience. Thus, it is we who impose the forms of space and time upon all possible sensation in mathematics, and it is we who render all experience coherent as scientific knowledge governed by traditional notions of substance and causality, by applying the pure concepts of the understanding to all possible experience. However, regulative principles of this sort hold only for the world as we know it, and since metaphysical propositions seek a truth beyond all experience, they cannot be established within the bounds of reason. In the Critique of Practical Reason, Kant grounded the conception of moral autonomy upon our postulation of God, freedom, and immortality.

Kant's philosophy of education involved some aspects of character education. He believed in the importance of treating each person as an end and not as a means. He thought that education should include training in discipline, culture, discretion, and moral training. Teaching children to think, and an emphasis on duty toward self and others, were also vital points in his philosophy. Teaching a child to think is associated closely with Kant's notion of will, and the education of the will means living according to the duties flowing from the categorical imperative. Kant's idealism is based on his concentration on thought processes and on the nature of the relationship between the mind and its objects on the one hand, and universal moral ideas on the other. These systematic thoughts have greatly influenced all subsequent Western philosophy, idealistic and otherwise.

(iii) Georg Wilhelm Friedrich Hegel

Georg Wilhelm Friedrich Hegel, a German philosopher, is one of the creators of German idealism.
He was born in Stuttgart in 1770. Hegel developed a comprehensive philosophical framework, or system, to account in an integrated and developmental way for the relation of mind and nature, the subject and object of knowledge, and psychology, the state, history, art, religion, and philosophy. In particular, he developed a concept of mind or spirit that manifested itself in a set of contradictions and oppositions that it ultimately integrated and united, such as those between nature and freedom, and immanence and transcendence, without eliminating either pole or reducing it to the other. Hegel's most influential conceptions are those of speculative logic or dialectic, absolute idealism, absolute spirit, negativity, sublation, the master/slave dialectic, ethical life, and the importance of history.

Hegelianism is a collective term for schools of thought following Hegel's philosophy, which can be summed up by the saying that the rational alone is real, meaning that all reality is capable of being expressed in rational categories. His goal was to reduce reality to a more synthetic unity within the system of transcendental idealism. In fact, one major feature of the Hegelian system is movement towards richer, more complex, and more complete synthesis. Three of Hegel's most famous books are Phenomenology of Mind, Logic, and Philosophy of Right. In these books, Hegel emphasizes three major aspects: logic, nature, and spirit. Hegel maintained that if his logical system were applied accurately, one would arrive at the Absolute Idea, which is similar to Plato's unchanging ideas. The difference, however, is that Hegel was sensitive to change: change, development, and movement are all central and necessary in his system. Nature was considered to be the opposite of the Absolute Idea. Ideas and nature together form the Absolute Spirit, which is manifested in history, art, religion, and philosophy. Hegel's idealism lies in the search for the final Absolute Spirit. Examining any one thing requires examining or referring to another thing. Hegel's thinking is not as prominent as it once was, because his system led to the glorification of the state at the expense of individuals.

Hegel thought that to be truly educated an individual must pass through the various stages of the cultural evolution of mankind. Additionally, he reasoned that it was possible for some individuals to know everything essential in the history of humanity. The far-reaching influence of Hegel is due in some measure to the undoubted vastness of the scheme of philosophical synthesis which he conceived and partly realized. A philosophy which undertook to organize under the single formula of triadic development every department of knowledge, from abstract logic up to the philosophy of history, has a great deal of attractiveness to those who are metaphysically inclined. Hegel's philosophy is the highest expression of that spirit of collectivism which characterized the nineteenth century. In theology, Hegel revolutionized the methods of inquiry. The application of his notion of development to biblical criticism and to historical investigation is obvious to anyone who compares the spirit and purpose of contemporary theology with the spirit and purpose of the theological literature of the first half of the nineteenth century. In science, as well, and in literature, the substitution of the category of becoming for the category of being is a very patent fact, and is due to the influence of Hegel's method.
In political economy and political science, the effect of Hegel's collectivistic conception of the state was to supplant, to a large extent, the individualistic conception which was handed down from the eighteenth century to the nineteenth. Hegel also had considerable influence on the philosophy and theory of education. He appeared to think that to be truly educated, an individual must pass through the various stages of the cultural evolution of humankind. This idea applies equally well to the development of science and technology. For instance, to a person who lived 300 years ago, electricity was unknown except as a natural occurrence, such as lightning. Today, by contrast, practically everyone depends on electrical power for everyday use and has a working, practical knowledge of it entirely outside the experience of a person from the past. A contemporary person can easily learn elementary facts about electricity in a relatively short time; that is, he or she can pass through, or learn, an extremely important phase of our cultural evolution simply because of the passing of time. In short, in Hegel's philosophy of education, he believed that only mind is real and that human thought, through participation in the universal spirit, progresses toward a destined ideal by a dialectical process of resolving opposites through synthesis.

7.2 REALISM

According to Ozmon and Craver (2008), the most central thread of realism is the principle, or thesis, of independence. Realists believe that the study of ideas can be enhanced by the study of material things. More generally, realism is any philosophical theory that emphasizes the existence of a real external world; the term stands for the theory that there is a reality quite independent of the mind. To understand this complex philosophy, one must examine its development and the philosophers who shaped it: thinkers such as Aristotle, Thomas Aquinas, Francis Bacon, John Locke, Alfred North Whitehead, and Bertrand Russell have contributed much to realist ideology.

7.2.1 Aristotelian Realism

Aristotle (384-322 B.C.E.), a great Greek philosopher, was the son of a physician. Aristotle believed that the world could be understood at a fundamental level through the detailed observation and cataloguing of phenomena. As a result of this belief, Aristotle wrote about practically everything, from poetics to politics to the natural sciences. Aristotle was the first person to assert that nature is understandable. Aristotelian realism holds that ideas, such as the idea of God or the idea of a tree, can exist without matter, but matter cannot exist without form. In order to get to form, it is necessary to study material things. As a result, Aristotle used the syllogism, a process of "ordering statements about reality in a logical, systematic form" (Ozmon & Craver, 2008). This systematic form includes a major premise, a minor premise, and a conclusion: All men are mortal; Socrates is a man; therefore, Socrates is mortal.

Aristotle described the relation between form and matter with the Four Causes: (a) Material cause - the matter from which something is made; (b) Formal cause - the design that shapes the material object; (c) Efficient cause - the agent that produces the object; and (d) Final cause - the direction toward which the object is tending. Through these different causes, Aristotle demonstrated that matter is constantly in a process of change. He believed that God, the Ultimate Reality, held all creation together. Organization was very important in Aristotle's philosophy. It was his thought that human beings, as rational creatures, are fulfilling their purpose when they think, and that thinking is their highest characteristic. According to Aristotle, each thing has a purpose, and education's purpose is to develop the capacity for reasoning.
Proper character was formed by following the Golden Mean, the path of moderation between extremes. The importance of education in the philosophy of Aristotle was enormous, since the individual could learn to use his reason to arrive at virtue, happiness, and political harmony only through the process of education. For Aristotle, the purpose of education is to produce a good man. Man is not good by nature, so he must learn to control his animal activities through the use of reason. Only when man behaves by habit and reason, according to his nature as a rational being, is he capable of happiness. In short, education must aim at the development of reason.

7.2.2 Religious Realism: Thomas Aquinas

Saint Thomas Aquinas (1225-1274) was a priest of the Roman Catholic Church and Doctor Communis. He is frequently referred to as Thomas, since Aquinas refers to his residence rather than his surname. He was the foremost classical proponent of natural theology and the father of the Thomistic school of philosophy and theology. The philosophy of Aquinas has exerted enormous influence on subsequent Christian theology, especially in the Roman Catholic Church, and extends to Western philosophy in general. He stands as a vehicle and modifier of Aristotelianism, which he merged with the thought of Augustine. Aquinas believed that for the knowledge of any truth whatsoever man needs divine help, so that the intellect may be moved by God to its act. He also believed that human beings have the natural capacity to know many things without special divine revelation, even though such revelation occurs from time to time. Aquinas believed that truth is known through reason (natural revelation) and through faith (supernatural revelation). Supernatural revelation has its origin in the inspiration of the Holy Spirit and is made available through the teaching of the prophets, summed up in Holy Scripture, and transmitted by the Magisterium, the sum of which is called Tradition. Natural revelation, on the other hand, is the truth available to all people through their human nature: certain truths that all men can attain by correct human reasoning.

Thomism is the philosophical school that arose as a legacy of the work and thought of Thomas Aquinas, and it is based on the Summa Theologica, meaning "summary of theology". The Summa Theologica, written from 1265 to 1274, is the most famous work of Thomas Aquinas and is arguably second only to the Bible in importance to the Roman Catholic Church. Although the book was never finished, it was intended as a manual for beginners, a compilation of all of the main theological teachings of that time. It summarizes the reasoning for almost all points of Christian theology in the West. The Summa's topics follow a cycle: (a) the existence of God; (b) God's creation; (c) man; (d) man's purpose; (e) Christ; (f) the Sacraments; and (g) back to God. In these works, faith and reason are harmonized into a grand theologico-philosophical system which inspired the medieval philosophical tradition known as Thomism and which has been favored by the Roman Catholic Church ever since.

Aquinas made an important contribution to epistemology, recognizing the central part played by sense perception in human cognition: it is through the senses that we first become acquainted with existent, material things. Moreover, in the Summa Theologica, Aquinas records his famous five ways, which seek to prove the existence of God from the facts of change, causation, contingency, variation, and purpose.
These cosmological and teleological arguments can be neatly expressed in syllogistic form as below:
(i) Way 1
• The world is in motion (motus).
• All changes in the world are due to some prior cause.
• There must be a prior cause for this entire sequence of changes, that is, God.
(ii) Way 2
• The world is a sequence of events.
• Every event in the world has a cause.
• There must be a cause for the entire sequence of events, that is, God.
(iii) Way 3
• The world might not have been.
• Everything that exists in the world depends on some other thing for its existence.
• The world itself must depend upon some other thing for its existence, that is, God.
(iv) Way 4
• There are degrees of perfection in the world.
• Things are more perfect the closer they approach the maximum.
• There is a maximum perfection, that is, God.
(v) Way 5
• Each body has a natural tendency towards its goal.
• All order requires a designer.
• This end-directedness of natural bodies must have a designing force behind it. Therefore each natural body has a designer, that is, God.

Thomas Aquinas tried to balance the philosophy of Aristotle with Christian ideas. He believed that truth was passed to humans by God through divine revelation, and that humans had the ability to seek out truth. Unlike Aristotle, Aquinas's realism came to the forefront because he held that human reality is not only spiritual or mental but also physical and natural. From the standpoint of a human teacher, the path to the soul lies through the physical senses, and education must use this path to accomplish learning. Proper instruction thus directs the learner to knowledge that leads to true being by progressing from sense experience toward higher understanding.

In view of education, Aquinas believed that the primary agencies of education are the family and the church; the state - or organized society - runs a poor third. The family and the church have an obligation to teach those things that relate to the unchanging principles of moral and divine law. In fact, Aquinas mentioned that the mother is the child's first teacher, and because the child is molded easily, it is the mother's role to set the child's moral tone; the church stands as the source of knowledge of the divine and should set the grounds for understanding God's law. The state should formulate and enforce law on education, but it should not abridge the educational primacy of the home and church.

7.2.3 Modern Realism: Francis Bacon and John Locke

Modern realism began to develop because classical realism did not adequately include a method of inductive thinking: if the original premise or truth was incorrect, then there was a possibility of error in the logic of the rest of the thinking. Modern realists therefore believed that a process of induction must be used to develop and verify ideas. Of all the philosophers engaged in this effort, the two most outstanding were Francis Bacon and John Locke, who were both involved in developing systematic methods of thinking and ways to increase human understanding.

(a) Francis Bacon

Bacon (1561-1626) was an English philosopher, statesman, scientist, lawyer, jurist, and author. He also served as a politician in the courts of Elizabeth I and James I. He was not successful in his political efforts, but his record in philosophical thought remains extremely influential. The Novum Organum is a philosophical work by Francis Bacon published in 1620; its title is a reference to Aristotle's work Organon, his treatise on logic and syllogism.
In the Novum Organum, Bacon details a new system of logic he believes to be superior to the old ways of the syllogism of Aristotle. In this work we see the development of the Baconian Method, consisting of procedures for isolating the form, nature, or cause of a phenomenon, employing the method of agreement, the method of difference, and the method of associated variation.

Bacon felt that the problem with religious realism was that it began with dogma or belief and then worked toward deducing conclusions. He felt that science could not work with this process because it was inappropriate and ineffective for the scientific process to begin with preconceived ideas. Bacon felt that developing effective means of inquiry was vital because knowledge was power that could be used to deal effectively with life. He therefore devised the inductive method of acquiring knowledge, which begins with observations and then uses reasoning to make general statements or laws. Verification was needed before a judgment could be made, and when data were collected, if contradictions were found, the ideas would be discarded.

The Baconian Method consists of procedures for isolating the form, nature, or cause of a phenomenon, including the method of agreement, the method of difference, and the method of concomitant or associated variation. Bacon suggests that we draw up a list of all things in which the phenomenon we are trying to explain occurs, as well as a list of things in which it does not occur. Then we rank the lists according to the degree to which the phenomenon occurs in each one. After that, we should be able to deduce what factors match the occurrence of the phenomenon in one list and do not occur in the other list, and also what factors change in accordance with the way the data have been ranked. From this, Bacon concludes, we should be able to deduce by elimination and inductive reasoning what is the cause underlying the phenomenon.

Use of the scientific or inductive approach uncovered many errors in propositions that had originally been taken for granted. Bacon urged that people should re-examine all previously accepted knowledge. At the least, he considered that people should attempt to rid themselves of the various idols in their minds, before which they bow down and which cloud their thinking. Bacon identified four classes of idols:

(i) Idols of the Tribe (Idola Tribus): This is humans' tendency to perceive more order and regularity in systems than truly exists, and is due to people following their preconceived ideas about things.
(ii) Idols of the Cave or Den (Idola Specus): This is due to individuals' personal weaknesses in reasoning owing to particular personalities, likes, and dislikes. For instance, a woman who has had several bad experiences with men with moustaches might conclude that all moustached men are bad; this is a clear case of faulty generalization.
(iii) Idols of the Marketplace (Idola Fori): This is due to confusions in the use of language and to taking some words in science to have a different meaning from their common usage. For example, words such as liberal and conservative might have little meaning when applied to people, because a person could be liberal on one issue and conservative on another.
(iv) Idols of the Theatre (Idola Theatri): This is due to using philosophical systems which have incorporated mistaken methods.

Bacon insisted on a housekeeping of the mind, in which we should break away from the dead ideas of the past and begin again by using the method of induction.
Bacon did not propose an actual philosophy, but rather a method of developing philosophy. He wrote that, although philosophy at the time used the deductive syllogism to interpret nature, the philosopher should instead proceed through inductive reasoning from fact to axiom to law.

(b) John Locke

John Locke (1632-1704) was an English philosopher. Locke is considered the first of the British empiricists. His ideas had enormous influence on the development of epistemology and political philosophy, and he is widely regarded as one of the most influential Enlightenment thinkers, classical republicans, and contributors to liberal theory. Locke's writings influenced Voltaire and Rousseau, many Scottish Enlightenment thinkers, and the American revolutionaries; this influence is reflected in the American Declaration of Independence.

Some Thoughts Concerning Education is a 1693 discourse on education written by John Locke; for over a century it was the most important philosophical work on education in England. In his Essay Concerning Human Understanding, written in 1690, Locke outlined a new theory of mind, contending that the child's mind is a tabula rasa - a blank slate or empty mind - that is, it does not contain any innate or inborn ideas. In describing the mind in these terms, Locke was drawing on Plato's Theaetetus, which suggests that the mind is like a wax tablet. Although Locke argued vigorously for the tabula rasa theory of mind, he nevertheless did believe in innate talents and interests. For example, he advises parents to watch their children carefully in order to discover their aptitudes, and to nurture their children's own interests rather than force them to participate in activities which they dislike. John Locke believed that the mind was a blank slate at birth; information and knowledge were added through experience, perception, and reflection. He felt that what we know is what we experience.

Another of Locke's most important contributions to eighteenth-century educational theory also stems from his theory of the self. He writes that "the little and almost insensible impressions on our tender infancies have very important and lasting consequences." That is, the associations of ideas made when young are more significant than those made when mature, because they are the foundation of the self: they mark the tabula rasa.

7.2.4 Contemporary Realism: Alfred North Whitehead and Bertrand Russell

Contemporary realism developed around the twentieth century out of concerns with science and scientific problems of a philosophical nature (Ozmon and Craver, 2008). Two outstanding figures in twentieth-century contemporary realism were Alfred North Whitehead and Bertrand Russell.

(a) Alfred North Whitehead

Alfred North Whitehead (1861-1947) was an English mathematician who became a philosopher. He wrote on algebra, logic, the foundations of mathematics, the philosophy of science, physics, metaphysics, and education. He co-authored the epochal Principia Mathematica with Bertrand Russell; Principia Mathematica is a three-volume work on the foundations of mathematics. Whitehead's philosophical influence can be felt in all three of the main areas in which he worked - logic and the foundations of mathematics, the philosophy of science, and metaphysics - as well as in other areas such as ethics, education, and religion.
Whitehead was interested in actively utilizing the knowledge and skills that were taught to students to a particular end. He believed we should aim at producing men who possess both culture and expert knowledge in some special direction. He also thought that education has to impart an intimate sense of the power and beauty of ideas, coupled with structure for those ideas, together with a particular body of knowledge that has peculiar reference to the life of the being possessing it.

(b) Bertrand Arthur William Russell

Bertrand Arthur William Russell, a British mathematician and philosopher, had embraced materialism in his early writing career. Russell earned his reputation as a distinguished thinker through his work in mathematics and logic. In 1903 he published The Principles of Mathematics, and by 1913 he and Alfred North Whitehead had published the three volumes of Principia Mathematica. The research Russell did during this period establishes him as one of the founding fathers of modern analytical philosophy, arguing for mathematical quantification as the basis of philosophical generalization. Russell appears to have discovered his paradox in the late spring of 1901, while working on his Principles of Mathematics of 1903.

Russell's paradox is the most famous of the logical or set-theoretical paradoxes. The paradox arises within naive set theory by considering the set of all sets that are not members of themselves. Such a set appears to be a member of itself if and only if it is not a member of itself, hence the paradox. For instance, some sets, such as the set of all teacups, are not members of themselves; other sets, such as the set of all non-teacups, are members of themselves. Call the set of all sets that are not members of themselves R. If R is a member of itself, then by definition it must not be a member of itself; similarly, if R is not a member of itself, then by definition it must be a member of itself. The paradox has prompted much work in logic, set theory, and the philosophy and foundations of mathematics.
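The argument in the preceding paragraph can be written compactly in the standard set-builder notation of set theory; the display below is a conventional symbolic rendering of the set R just described, not additional doctrine from the module.

\[
R \;=\; \{\, x \mid x \notin x \,\}
\qquad\text{from which it follows that}\qquad
R \in R \;\Longleftrightarrow\; R \notin R .
\]

Reading the biconditional in either direction yields a contradiction, which is why naive set theory cannot consistently admit R as a set.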
7.3 PRAGMATISM

The root of the word pragmatism is a Greek word meaning work. According to pragmatism, the truth or meaning of an idea or a proposition lies in its observable practical consequences rather than in anything metaphysical. It can be summarized by the phrase "whatever works is likely true." Because reality changes, whatever works will also change; thus, truth must also be changeable. Pragmatism is also a practical, matter-of-fact way of approaching or assessing situations or of solving problems. We might wonder, however, why people insist on doing things and using processes that do not work. Several reasons for this are the weight of custom and tradition, fear and apathy, and the fact that habitual ways of thinking and doing seem to work even though they have lost their usefulness in today's world. Pragmatism as a philosophical movement began in the United States, but its background can be found in the works of such people as Francis Bacon and John Locke.

7.3.1 Centrality of Experience: Francis Bacon and John Locke

Human experience is an important ingredient of pragmatist philosophy. John Locke talked about the mind as a "tabula rasa" and the world of experience as the verification of thought; in other words, the mind is a tabula rasa at birth, and the world of experience verifies thought. Another philosopher, Rousseau, followed Locke's idea but expanded the "centrality of experience" as the basis for a philosophical belief. Rousseau saw people as basically good but corrupted by civilization. If we would avoid that corruption, then we should focus on the educational connection between nature and experience by building the education of our youth around their natural inquisitiveness, while attending to their physiological, psychological, and social developmental stages.

Locke believed that as people have more experiences, they have more ideas imprinted on the mind and more with which to relate. However, he argued that one could have false ideas as well as true ones. The only way people can be sure their ideas are correct is by verifying them in the world of experience, such as through physical proof. Locke emphasized the idea of placing children in the most desirable environment for their education and pointed out the importance of environment in making people who they are. Nevertheless, Locke's notion of experience contained an internal flaw and caused difficulties: his insistence that mind is a tabula rasa established mind as a passive, malleable instrument.

7.3.2 Science and Society: Auguste Comte, Charles Darwin, and John Dewey

Bridging the transition between the Age of Enlightenment and the Modern Age, Auguste Comte (1798-1857) and Charles Darwin (1809-1882) shared a belief that science could have a profound and positive effect on society. Comte's commitment to the use of science to address the ills of society resulted in the study of sociology. The effects of Charles Darwin and his five years aboard the HMS Beagle are still echoing throughout the world of religion and education. Basically, Comte argued for the use of science to solve social problems through sociology, and his ideas very much influenced John Dewey's (1859-1952) thinking regarding the role of science in society. Darwin, in On the Origin of Species, argued that nature operates by a process of development without predetermined directions or ends, and that reality is found not in being but in becoming; this promoted the pragmatist view that education is tied directly to biological and social development.

Figure 7.12: From left: Auguste Comte, Charles Darwin, and John Dewey

Auguste Comte was a French philosopher and one of the founders of sociology and positivism. He is responsible for coining and introducing the term altruism. Altruism is an ethical doctrine that holds that individuals have a moral obligation to help, serve, or benefit others, if necessary at the sacrifice of self-interest. Auguste Comte's version of altruism calls for living for the sake of others; one who holds to either of these ethics is known as an "altruist."

Comte saw one universal law at work in all sciences, which he called the law of three phases. It is by his statement of this law that he is best known in the English-speaking world; namely, that society has gone through three phases: theological, metaphysical, and scientific. In Comte's lifetime, his work was sometimes viewed skeptically, with perceptions that he had elevated positivism to a religion and had named himself the Pope of Positivism. Comte's emphasis on the interconnectedness of social elements was a forerunner of modern functionalism. His emphasis on a quantitative, mathematical basis for decision-making remains with us today; it is a foundation of the modern notion of positivism, of modern quantitative statistical analysis, and of business decision making.
Comte's description of the continuing cyclical relationship between theory and practice is seen in modern business systems of Total Quality Management and Continuous Quality Improvement, whose advocates describe a continuous cycle of theory and practice.

Charles Darwin wrote On the Origin of Species, published in 1859, a seminal work of scientific literature considered to be the foundation of evolutionary biology. The full title was On the Origin of Species by Means of Natural Selection, or the Preservation of Favored Races in the Struggle for Life. For the sixth edition of 1872, the short title was changed to The Origin of Species. Darwin's book introduced the theory that populations evolve over the course of generations through a process of natural selection, and presented a body of evidence that the diversity of life arose through a branching pattern of evolution and common descent. He included evidence that he had accumulated on the voyage of the Beagle in the 1830s, and his subsequent findings from research, correspondence, and experimentation.

Various evolutionary ideas had already been proposed to explain new findings in biology. There was growing support for such ideas among some anatomists and the general public, but during the first half of the 19th century the English scientific establishment was closely tied to the Church of England, while science was part of natural theology. Ideas about the transmutation of species were controversial, as they conflicted with the beliefs that species were unchanging parts of a designed hierarchy and that humans were unique, unrelated to animals. The political and theological implications were intensely debated, but transmutation was not accepted by the scientific mainstream. The book was written to be read by non-specialists and attracted widespread interest upon its publication.

Dewey, on the other hand, attempted to create a philosophy that captured and reflected the influences of the contemporary world on the preparation of future leaders through the educational system. Reliance on any source of knowledge has to be tempered by an understanding of the societal effects if the learning is to be meaningful, beneficial, or productive. John Dewey discussed the nature of experience: experience and nature are not two different things separated from each other; rather, experience itself is of nature - experience is in and of nature. Dewey viewed method, rather than abstract answers, as a central concern. He thought that modern industrial society had submerged both individuality and sociality. He defined individuality as the interplay of personal choice and freedom with objective conditions, whereas sociality refers to a milieu or medium conducive to individual development. Moreover, Dewey believed that most religions have a negative effect because they tend to classify people. Dewey thought that two schools of social and religious reform exist: one holds that people must be constantly watched, guided, and controlled to see that they stay on the right path, and the other holds that people will control their own actions intelligently. Dewey also believed that a truly aesthetic experience is one in which people are unified with their activity. Finally, Dewey stated that we should project art into all human activities, such as the art of politics and the art of education.

(a) How is pragmatism similar to and different from idealism and realism? Explain.
(b) Discuss your thoughts about why pragmatism is seen as most effective in a democratic society.
(c) Compare and contrast Dewey's philosophical thoughts with your society's approach and with your own.

7.4 IDEALISM, REALISM, AND PRAGMATISM AND THEIR CRITIQUE IN EDUCATION

Developing a philosophical perspective on education is not easy. However, it is very important if a person wants to become a more effective professional educator. A sound philosophical perspective helps one see the interaction among students, curriculum, and the aims and goals of education, and it shows how the various types of philosophy bear on a teacher's personal and professional undertakings.

7.4.1 Idealism in Philosophy of Education

Idealism as a philosophy had its greatest impact during the nineteenth century; its influence in today's world is less important than it has been in the past. Much of what we know as idealism today was influenced by German idealism. The main tenet of idealism is that ideas and knowledge are the truest reality. Many things in the world change, but ideas and knowledge are enduring. Idealism has often been referred to as "idea-ism". Idealists believe that ideas can change lives. The most important part of a person is the mind, which is to be nourished and developed. Table 7.1 discusses the aims of education, methods of education, curriculum, role of the teacher, and critique for idealism in philosophy of education:

Table 7.1: Idealism in Philosophy of Education

7.4.2 Realism in Philosophy of Education

According to Ozmon and Craver (2008), "the central thread of realism is the principle of independence." The worlds of ideas and matter defined in idealism by Plato and Socrates do not exist separately and apart from each other for realists. Realists contend that material things can exist whether or not there is a human being around to appreciate or perceive them. Table 7.2 discusses the aims of education, methods of education, curriculum, role of the teacher, and critique for realism in philosophy of education:

Table 7.2: Realism in Philosophy of Education

7.4.3 Pragmatism in Philosophy of Education

Pragmatism is basically an American philosophy, but it has its roots in European thinking. Pragmatists believe that ideas are tools that can be used to cope with the world. They believe that educators should seek out new processes, incorporate traditional and contemporary ideas, or create new ideas to deal with the changing world. A great deal of stress is placed on sensitivity to consequences, but pragmatists are quick to state that consideration should also be given to the method of arriving at the consequences: the means to solving a problem is as important as the end. The scientific method is important in the thinking process for pragmatists, but it is not meant to be sterile laboratory thinking. Pragmatists want to apply the scientific method for the greater good of the world. They believe that although science has caused many problems in our world, it can still be used to solve them. However, the progressive pragmatic movement believed in separating children by intelligence and ability in order to meet the needs of society. The softer side of that philosophy believed in giving children a great deal of freedom to explore, leading many people to label the philosophy of pragmatism in education as permissive. Table 7.3 discusses the aims of education, methods of education, curriculum, role of the teacher, and critique for pragmatism in philosophy of education:

Table 7.3: Pragmatism in Philosophy of Education

Which of the philosophies is most compatible with your beliefs as an educator? Why?

• Basically, there are three general or world philosophies: idealism, realism, and pragmatism.
• Idealism is the philosophical theory that maintains that the ultimate nature of reality is based on mind or ideas. It holds that the so-called external or "real" world is inseparable from mind, consciousness, or perception.
• Platonic idealism says that there exists a perfect realm of forms and ideas and our world merely contains shadows of that realm; only ideas can be known or have any reality.
• Religious idealism argues that all knowledge originates in perceived phenomena which have been organized by categories.
• Modern idealism says that all objects are identical with some idea and that ideal knowledge is itself the system of ideas.
• Platonic idealism usually refers to Plato's theory of forms or doctrine of ideas. Plato held the realm of ideas to be absolute reality. Plato's method was the dialectic: all thinking begins with a thesis, as exemplified in the Socratic dialogues.
• Augustine discussed the universe as being divided into the City of God and the earthly city, the City of Man.
• Augustine believed that faith-based knowledge is determined by the church and that all true knowledge comes from God.
• Descartes was convinced that science and mathematics could be used to explain everything in nature, so he was the first to describe the physical universe in terms of matter and motion, seeing the universe as a giant mathematically designed engine.
• Kant held that the most interesting and useful varieties of human knowledge rely upon synthetic a priori judgments, which are, in turn, possible only when the mind determines the conditions of its own experience.
• Kant's philosophy of education involved some aspects of character education. He believed in the importance of treating each person as an end and not as a means.
• Hegel developed a concept of mind or spirit that manifested itself in a set of contradictions and oppositions that it ultimately integrated and united, such as those between nature and freedom, and immanence and transcendence, without eliminating either pole or reducing it to the other.
• "Hegelianism" is a collective term for schools of thought following Hegel's philosophy, which can be summed up by the saying that "the rational alone is real", meaning that all reality is capable of being expressed in rational categories.
• The most central thread of realism is the principle or thesis of independence. This thesis holds that reality, knowledge, and value exist independently of the human mind.
• Aristotle believed that the world could be understood at a fundamental level through the detailed observation and cataloguing of phenomena.
• Aquinas believed that truth is known through reason (natural revelation) and faith (supernatural revelation).
• Thomism is the philosophical school that arose as a legacy of the work and thought of Thomas Aquinas, based on the Summa Theologica, meaning "summary of theology".
• Aquinas mentioned that the mother is the child's first teacher, and because the child is molded easily, it is the mother's role to set the child's moral tone; the church stands as the source of knowledge of the divine and should set the grounds for understanding God's law. The state should formulate and enforce law on education.
• Bacon devised the inductive method of acquiring knowledge, which begins with observations and then uses reasoning to make general statements or laws. Verification was needed before a judgment could be made; when data were collected, if contradictions were found, the ideas would be discarded.
• The "Baconian Method" consists of procedures for isolating the form, nature, or cause of a phenomenon, including the method of agreement, the method of difference, and the method of concomitant or associated variation.
• Bacon identified the "idols", called the Idols of the Mind, which he described as things that obstructed the path of correct scientific reasoning.
• John Locke sought to explain how we develop knowledge. He attempted a rather modest philosophical task: "to clear the ground of some of the rubbish" that deters people from gaining knowledge. He was trying to do away with the kind of thinking Bacon had called "idols".
• Locke outlined a new theory of mind, contending that the child's mind was a "tabula rasa", a "blank slate" or "empty mind"; that is, it did not contain any innate or inborn ideas.
• Whitehead was interested in actively "utilising the knowledge and skills that were taught to students to a particular end". He believed we should aim at "producing men who possess both culture and expert knowledge in some special direction".
• Russell, one of the founding fathers of modern analytical philosophy, moved philosophy towards mathematical quantification as the basis of philosophical generalization.
• Russell's paradox is the most famous of the logical or set-theoretical paradoxes. The paradox arises within naive set theory by considering the set of all sets that are not members of themselves. Such a set appears to be a member of itself if and only if it is not a member of itself, hence the paradox.
• Pragmatism is a practical, matter-of-fact way of approaching or assessing situations or of solving problems.
• Human experience is an important ingredient of pragmatist philosophy.
• John Locke talked about the mind as a "tabula rasa" and the world of experience as the verification of thought; in other words, the mind is a tabula rasa at birth, and the world of experience verifies thought.
• Rousseau followed Locke's idea but expanded the "centrality of experience" as the basis for a philosophical belief. Rousseau saw people as basically good but corrupted by civilization. If we would avoid that corruption, then we should focus on the educational connection between nature and experience by building the education of our youth around that connection.
• Locke believed that as people have more experiences, they have more ideas imprinted on the mind and more with which to relate.
• Comte is responsible for the coining and introduction of the term altruism. Altruism is an ethical doctrine that holds that individuals have a moral obligation to help, serve, or benefit others, if necessary at the sacrifice of self-interest.
• One universal law that Comte saw at work in all sciences he called the "law of three phases". It is by his statement of this law that he is best known in the English-speaking world; namely, that society has gone through three phases: theological, metaphysical, and scientific.
• Dewey attempted to create a philosophy that captured and reflected the influences of the contemporary world on the preparation of future leaders through the educational system. Reliance on the source of knowledge has to be tempered by an understanding of the societal effects if the learning is to be meaningful, beneficial, or productive.
• John Dewey discussed the nature of experience; experience and nature are not two different things separated from each other; rather, experience itself is of nature: experience is in and of nature.
• Idealists believe that ideas can change lives. The most important part of a person is the mind.
It is to be nourished and developed.
• The world of ideas and matter defined in idealism by Plato and Socrates do not exist separately and apart from each other for realists. They contend that material things can exist whether or not there is a human being around to appreciate or perceive them.
• Pragmatists believe that educators should seek out new processes, incorporate traditional and contemporary ideas, or create new ideas to deal with the changing world.

Dewey, J. Democracy and education.
Dewey, J. Experience and education.
Locke, J. Two treatises of government, ed. Peter Laslett.
Locke, J. (1975). An essay concerning human understanding, ed. P. H. Nidditch.
Locke, J. (1989). Some thoughts concerning education, ed. John W. Yolton and Jean S. Yolton.
Locke, J. Some thoughts concerning education; and Of the conduct of the understanding, ed. Ruth W. Grant and Nathan Tarcov.
Philosophy of education.
Ozmon, H. A., & Craver, S. M. (2008). Philosophical foundations of education (8th ed.).
Turner, W. (1910). Philosophy of Immanuel Kant. In The Catholic Encyclopedia.
Wirth, Arthur G. (1966). John Dewey as educator: His design for work in education (1894-1904).
Bohac, P. (2001, February 6). Dewey's pragmatism. Chapter 4: Pragmatism and Education. Retrieved September 3, 2009, from http://www.brendawelch.com/uwf/pragmatism.pdf
http://www.kheru2006.webs.com/4idealism_realism_and_pragmatigsm_in_education.htm
Creative Debate is a role-playing exercise. Students assume a specific point of view and debate a controversial topic from this perspective. Creative Debates promote both critical thinking and tolerance of opposing views.

Steps to Creative Debate:
1. Discuss the rules for debate with the class. Have students suggest guidelines. Once a consensus is reached, post the rules for quick reference.
2. Suggest a topic for debate or allow the students to select a topic. If the topic requires research, allow the students to gather and organize information before the debate.
3. Divide the class into three groups. Select two groups to participate in the debate. The third group acts as observers. Rearrange the classroom so that opposing groups face one another and the observers sit to the side.
4. Provide a reading selection that states one of the positions on the debate topic. Assign one group to argue for the selection; the other group argues against.
5. Each student selects a character from the past or present that represents their position in the debate. (Teachers may want to suggest a list of characters to speed up this process.) Have each student introduce himself as the character to the class and then argue the topic from the perspective of this character. Encourage students to "act out" the character's personality (speech patterns, mannerisms, etc.).
6. Each group presents their positions for ten minutes. Allow extra time for rebuttals.
7. Next, ask the student teams to switch their positions and argue the opposing viewpoint. (Perhaps the group of observers might change places with one of the other groups.) Repeat the debate and rebuttal process.
8. At the end of the debate, ask students to reflect on their experiences. Raise questions like: Did you find it difficult to argue from both perspectives in the debate? What did you learn from this experience? Did your own views and opinions change? How would you approach a similar debate in the future?
http://www.readingeducator.com/strategies/debate.htm
Logical Reasoning is our guide to good decisions. It is also a guide to sorting out truth from falsehood. Like every subject, Logic has its own vocabulary, and it is important that you understand the meanings of some important words and terms on which problems are usually framed in the Common Admission Test. Once you have become familiar with the vocabulary of Logic, it will be imperative that you also understand some rules and principles on which questions can be solved. Some of the important types and styles of problems in logic are:

a. Problems based on 'Propositions and their Implications'
These problems typically have a proposition followed by either a deductive or an inductive argument. An argument has a minimum of two statements: a premise and a conclusion, in any order. It may also have more than one premise (statement) and more than one conclusion. The information in the premise(s) either makes the conclusion weak or makes it strong. The examinee is usually required to:
i. identify the position of the premise(s) vis-à-vis the conclusion, that is, whether the premise weakens or strengthens the conclusion
ii. identify if the conclusion drawn based on the premise(s) is correct or incorrect
iii. identify if only one conclusion follows, either conclusion follows, neither conclusion follows, or both conclusions follow (assuming the problem has two premises and two conclusions)
iv. identify an option in which the third statement is implied by the first two statements; this type of question is called a syllogism
v. identify the correct ordered pair where the first statement implies the second statement and both these statements are logically consistent with the main proposition (assuming each question has a main proposition followed by four statements A, B, C, D)
vi. identify the set in which the statements are most logically related (assuming each question has six statements and there are four options listed as sets of combinations of three statements, e.g. ABD, ECF, ABF, BCE)
vii. identify the option where the third segment can be logically deduced from the preceding two (assuming each question has a set of four statements and each of these statements has three segments), for example:
A. Tedd is beautiful; Robo is beautiful too; Tedd is Robo.
B. Some apples are guavas; some guavas are oranges; oranges are apples.
C. Tedd is beautiful; Robo is beautiful too; Tedd may be Robo.
D. Apples are guavas; guavas are oranges; oranges are grapes.
(a) Only C (b) Only A (c) A and C (d) Only B
The answer to the above question is option (c).
The above is in no way an exhaustive list of problems on logic, but it gives a fair view of the types and styles of questions that one may face.
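The syllogism check in point (iv) can be made mechanical. Below is a minimal sketch of that idea, not taken from any exam material: categories are modelled as subsets of a tiny universe, and a conclusion counts as deducible only if it holds in every "world" in which the premises hold. The helper names (`subsets`, `follows`) and the three-element universe are illustrative assumptions.

```python
from itertools import product

def subsets(universe):
    """All subsets of a small universe, built incrementally."""
    out = [frozenset()]
    for x in universe:
        out += [s | {x} for s in out]
    return out

UNIVERSE = {0, 1, 2}
# Each world assigns the three categories (apples, guavas, oranges) to subsets.
WORLDS = list(product(subsets(UNIVERSE), repeat=3))

def follows(premises, conclusion):
    """True only if the conclusion holds in every world where all premises hold."""
    return all(conclusion(*w) for w in WORLDS if all(p(*w) for p in premises))

# "All apples are guavas; all guavas are oranges" does yield "all apples are oranges".
print(follows(
    [lambda a, g, o: a <= g, lambda a, g, o: g <= o],
    lambda a, g, o: a <= o,
))  # True

# But "some apples are guavas; some guavas are oranges" (compare statement B above)
# does NOT yield "oranges are apples": a counterexample world exists.
print(follows(
    [lambda a, g, o: bool(a & g), lambda a, g, o: bool(g & o)],
    lambda a, g, o: bool(o & a),
))  # False
```

A three-element universe is already enough to expose a counterexample for the "some ... some ..." pattern, which is why such statements are not valid deductions even though they feel plausible.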
http://www.jagranjosh.com/articles/cat-logical-reasoning-format-syllabus-and-types-of-problem-1338317908-1
The NCTE Committee on Critical Thinking and the Language Arts defines critical thinking as "a process which stresses an attitude of suspended judgment, incorporates logical inquiry and problem solving, and leads to an evaluative decision or action." In a new monograph copublished by the ERIC Clearinghouse on Reading and Communication Skills, Siegel and Carey (1989) emphasize the roles of signs, reflection, and skepticism in this process. Ennis (1987) suggests that "critical thinking is reasonable, reflective thinking that is focused on deciding what to believe or do." However defined, critical thinking refers to a way of reasoning that demands adequate support for one's beliefs and an unwillingness to be persuaded unless the support is forthcoming. Why should we be concerned about critical thinking in our classrooms? Obviously, we want to educate citizens whose decisions and choices will be based on careful, critical thinking. Maintaining the right of free choice itself may depend on the ability to think clearly. Yet, we have been bombarded with a series of national reports which claim that "Johnny can't think" (Mullis, 1983; Gardner, 1983; Action for Excellence, 1983). All of them call for schools to guide students in developing the higher level thinking skills necessary for an informed society. Skills needed to begin to think about issues and problems do not suddenly appear in our students (Tama, 1986; 1989). Teachers who have attempted to incorporate higher level questioning in their discussions or have administered test items demanding some thought rather than just recall from their students are usually dismayed at the preliminary results. Unless the students have been prepared for the change in expectations, both the students and the teacher are likely to experience frustration. What is needed to cultivate these skills in the classroom? A number of researchers claim that the classroom must nurture an environment providing modeling, rehearsal, and coaching for students and teachers alike to develop a capacity for informed judgments (Brown, 1984; Hayes and Alvermann, 1986). Hayes and Alvermann report that this coaching led teachers to acknowledge students' remarks more frequently and to respond to the students more elaborately. It significantly increased the proportion of text-connected talk students used as support for their ideas and/or as cited sources of their information. In addition, students' talk became more inferential and analytical. A summary of the literature on the role of "wait time" (the time a teacher allows for a student to respond as well as the time an instructor waits after a student replies) found that it had an impact on students' thinking (Tobin, 1987). In this review of studies, Tobin found that those teachers who allowed a 3-5 second pause between the question and response permitted students to produce cognitively complex discourse. Teachers who consciously managed the duration of pauses after their questioning and provided regular intervals of silence during explanation created an environment where thinking was expected and practiced. However, Tobin concludes that "wait time" in and of itself does not ensure critical thinking. A curriculum which provides students with the opportunity to develop thinking skills must be in place. Interestingly, Tobin found that high achievers consistently were permitted more wait time than were less skilled students, indicating that teachers need to monitor and evaluate their own behavior while using such strategies.
Finally, teachers need to become more tolerant of "conflict," or confrontation, in the classroom. They need to raise issues which create dissonance and refrain from expressing their own bias, letting the students debate and resolve problems. Although a content area classroom which encourages critical thinking can produce psychological discomfort in some students as conflicting accounts of information and ideas are argued and debated, such feelings may motivate them to resolve an issue (Festinger, 1957). They need to get a feel for the debate and the conflict it involves. Isn't there ample everyday evidence of this: Donahue, Geraldo Rivera, USA Today? Authors like Frager (1984) and Johnson and Johnson (1979) claim that to really engage in critical thinking, students must encounter the dissonance of conflicting ideas. Dissonance, as discussed by Festinger (1957), is a psychological discomfort which occurs in the presence of an inconsistency and motivates students to resolve the issue. To help students develop skills in resolving this dissonance, Frager (1984) offers a model for conducting critical thinking classes and provides samples of popular issues that promote it: for example, banning smoking in public places, the bias infused in some sports accounts, and historical incidents written from both American and Russian perspectives. If teachers feel that their concept of thinking is instructionally useful, if they develop the materials necessary for promoting this thinking, and if they practice the procedures necessary, then the use of critical thinking activities in the classroom will produce positive results. Matthew Lipman (1988) writes, "The improvement of student thinking--from ordinary thinking to good thinking--depends heavily upon students' ability to identify and cite good reasons for their opinions." Training students to do critical thinking is not an easy task. Teaching which involves higher level cognitive processes, comprehension, inference, and decision making often proves problematic for students. Such instruction is often associated with delays in the progress of a lesson, with low success and completion rates, and even with direct negotiations by students to alter the demands of work (Doyle, 1985). This negotiation by students is understandable. They have made a career of passive learning. When met by instructional situations in which they may have to use some mental energies, some students resist that intellectual effort. What emerges is what Sizer (1984) calls a "conspiracy for the least," an agreement by the teacher and students to do just enough to get by. Despite the difficulties, many teachers are now promoting critical thinking in the classroom. They are nurturing this change from ordinary thinking to good thinking admirably. They are 1) promoting critical thinking by infusing instruction with opportunities for their students to read widely, to write, and to discuss; 2) frequently using course tasks and assignments to focus on an issue, question, or problem; and 3) promoting metacognitive attention to thinking so that students develop a growing awareness of the relationship of thinking to reading, writing, speaking, and listening. (See Tama, 1989.)
Another new ERIC/RCS and NCTE monograph (Neilsen, 1989) echoes similar advice, urging teachers to allow learners to be actively involved in the learning process, to provide consequential contexts for learning, to arrange a supportive learning environment that respects student opinions while giving enough direction to ensure their relevance to a topic, and to provide ample opportunities for learners to collaborate.

Action for Excellence. A Comprehensive Plan to Improve Our Nation's Schools. Denver: Education Commission of the States, 1983. 60pp. [ED 235 588]
Brown, Ann L. "Teaching students to think as they read: Implications for curriculum reform." Paper commissioned by the American Educational Research Association Task Force on Excellence in Education, October 1984. 42pp. [ED 273 567]
Doyle, Walter. "Recent research on classroom management: Implications for teacher preparation." Journal of Teacher Education, 36 (3), 1985, pp. 31-35.
Ennis, Robert. "A taxonomy of critical thinking dispositions and abilities." In Joan Baron and Robert Sternberg (Eds.), Teaching Thinking Skills: Theory and Practice. New York: W.H. Freeman, 1987.
Festinger, Leon. A Theory of Cognitive Dissonance. Evanston, Illinois: Row Peterson, 1957.
Frager, Alan. "Conflict: The key to critical reading instruction." Paper presented at the annual meeting of the Ohio Council of the International Reading Association Conference, Columbus, Ohio, October 1984. 18pp. [ED 251 806]
Gardner, David P., et al. A Nation at Risk: The Imperative for Educational Reform. An Open Letter to the American People. A Report to the Nation and the Secretary of Education. Washington, DC: National Commission on Excellence in Education, 1983. 72pp. [ED 226 006]
Hayes, David A., and Alvermann, Donna E. "Video assisted coaching of textbook discussion skills: Its impact on critical reading behavior." Paper presented at the annual meeting of the American Educational Research Association, San Francisco, April 1986. 11pp. [ED 271 734]
Johnson, David W., and Johnson, Roger T. "Conflict in the classroom: Controversy and learning," Review of Educational Research, 49 (1), Winter 1979, pp. 51-70.
Lipman, Matthew. "Critical thinking--What can it be?" Educational Leadership, 46 (1), September 1988, pp. 38-43.
Mullis, Ina V. S., and Mead, Nancy. "How well can students read and write?" Issuegram 9. Denver: Education Commission of the States, 1983. 9pp. [ED 234 352]
Neilsen, Allan R. Critical Thinking and Reading: Empowering Learners to Think and Act. Monographs on Teaching Critical Thinking, Number 2. Bloomington, Indiana: ERIC Clearinghouse on Reading and Communication Skills and the National Council of Teachers of English, Urbana, Illinois, 1989. [Available from ERIC/RCS and NCTE.]
Siegel, Marjorie, and Carey, Robert F. Critical Thinking: A Semiotic Perspective. Monographs on Teaching Critical Thinking, Number 1. Bloomington, Indiana: ERIC Clearinghouse on Reading and Communication Skills and the National Council of Teachers of English, Urbana, Illinois, 1989. [Available from ERIC/RCS and NCTE.]
Sizer, Theodore. Horace's Compromise: The Dilemma of the American High School. Boston: Houghton-Mifflin, 1984. [ED 264 171; not available from EDRS.]
Tama, M. Carrol. "Critical thinking has a place in every classroom," Journal of Reading, 33 (1), October 1989.
Tama, M. Carrol. "Thinking skills: A return to the content area classroom." Paper presented at the annual meeting of the International Reading Association, 1986. 19pp. [ED 271 737]
Tobin, Kenneth.
"The role of wait time in higher cognitive level learning," Review of Educational Research, 57 (1), Spring 1987, pp. 69-95.
http://www.ericdigests.org/pre-9211/critical.htm
"Visualization of the quicksort algorithm. The horizontal lines are pivot values.\n|Worst case perfo(...TRUNCATED)
http://en.wikipedia.org/wiki/Quicksort