score: float64, 4 to 5.34
text: string, 256 to 572k characters
url: string, 15 to 373 characters
4.09375
Temporal range: Early Eocene to present. Genera: some 25, see text. Synonym: Striginae sensu Sibley & Ahlquist.

The true owls or typical owls (family Strigidae) are one of the two generally accepted families of owls, the other being the barn owls (Tytonidae). The Sibley-Ahlquist taxonomy unites the Caprimulgiformes with the owl order; in that arrangement, the typical owls are a subfamily, Striginae. This is unsupported by more recent research (see Cypselomorphae for details), but the relationships of the owls in general are still unresolved. This large family comprises around 189 living species in 25 genera. The typical owls have a cosmopolitan distribution and are found on every continent except Antarctica.

While typical owls (hereafter referred to simply as owls) vary greatly in size, with the smallest species, the elf owl, being a hundredth the size of the largest, the Eurasian eagle-owl and Blakiston's fish owl, owls generally share an extremely similar body plan. They tend to have large heads, short tails, cryptic plumage and round facial discs around the eyes. The family is generally arboreal (with a few exceptions, such as the burrowing owl) and obtains its food on the wing. The wings are large, broad, rounded and long. As in other birds of prey, in many owl species females are larger than males. Because of their nocturnal habits, owls tend not to exhibit sexual dimorphism in their plumage. The feathers are soft and the base of each is downy, allowing for silent flight. The toes and tarsus are feathered in some species, and more so in species at higher latitudes. Numerous species of owl in the genus Glaucidium, as well as the northern hawk-owl, have eye patches on the backs of their heads, apparently to convince other birds they are being watched at all times. Numerous nocturnal species have ear-tufts, feathers on the sides of the head that are thought to have a camouflage function, breaking up the outline of a roosting bird. The feathers of the facial disc are arranged so as to increase the sound delivered to the ears. Hearing in owls is highly sensitive, and the ears are asymmetrical, allowing the owl to localise a sound. In addition to acute hearing, owls have massive eyes relative to their body size. Contrary to popular belief, however, owls cannot see in complete darkness, and they see well in daylight.

Owls are generally nocturnal and spend much of the day roosting. They are often perceived as tame, since they allow people to approach quite closely before taking flight, but they are instead attempting to avoid detection. The cryptic plumage and inconspicuous roosting locations they adopt are an effort to avoid predators and mobbing by small birds.
- Genus Megascops – screech-owls, some 20 species
- Genus Otus – scops-owls; probably paraphyletic, about 45 species
- Genus Pyrroglaux – Palau owl
- Genus Margarobyas – bare-legged owl or Cuban screech-owl
- Genus Ptilopsis – white-faced owls, 2 species
- Genus Mimizuku – giant scops-owl or Mindanao eagle-owl
- Genus Bubo – horned owls, eagle-owls and fish-owls; paraphyletic with Nyctea, Ketupa and Scotopelia, some 25 species
- Genus Strix – earless owls, some 19 species, including 4 that were previously classified as Ciccaba
- Genus Ciccaba – the 4 species have been transferred to Strix
- Genus Lophostrix – crested owl
- Genus Jubula – maned owl
- Genus Pulsatrix – spectacled owls, 3 species
- Genus Surnia – northern hawk-owl
- Genus Glaucidium – pygmy owls, about 30–35 species
- Genus Xenoglaux – long-whiskered owlet
- Genus Micrathene – elf owl
- Genus Athene – 2–4 species (depending on whether Speotyto and Heteroglaux are included or not)
- Genus Aegolius – saw-whet owls, 4 species
- Genus Ninox – Australasian hawk-owls, some 20 species
- Genus Uroglaux – Papuan hawk-owl
- Genus Pseudoscops – Jamaican owl and possibly striped owl
- Genus Asio – eared owls, 6–7 species
- Genus Nesasio – fearful owl
- Genus Mascarenotus – Mascarene owls, 3 species (extinct c. 1850)
- Genus Sceloglaux – laughing owl (extinct 1914?)
- Genus Grallistrix – stilt-owls, 4 species
- Genus Ornimegalonyx – Caribbean giant owls, 1–2 species
  - Cuban giant owl, Ornimegalonyx oteroi
  - Ornimegalonyx sp. – probably a subspecies of O. oteroi
- Genus Asphaltoglaux
- Mioglaux (Late Oligocene? – Early Miocene of WC Europe) – includes "Bubo" poirreiri
- Intulula (Early/Middle Miocene of WC Europe) – includes "Strix/Ninox" brevis
- Alasio (Middle Miocene of Vieux-Collonges, France) – includes "Strix" collongensis
- "Otus/Strix" wintershofensis – fossil (Early/Middle Miocene of Wintershof West, Germany) – may be close to the extant genus Ninox
- "Strix" edwardsi – fossil (Middle Miocene of Grive-Saint-Alban, France)
- "Asio" pygmaeus – fossil (Early Pliocene of Odessa, Ukraine)
- Strigidae gen. et sp. indet. UMMP V31030 (Rexroad Late Pliocene of Kansas, USA) – Strix/Bubo?
- Ibiza owl, Strigidae gen. et sp. indet. – prehistoric (Late Pleistocene/Holocene of Es Pouàs, Ibiza)

The supposed fossil heron "Ardea" lignitum (Late Pliocene of Germany) was apparently a strigid owl, possibly close to Bubo. The Early–Middle Eocene genus Palaeoglaux from west-central Europe is sometimes placed here, but given its age it is probably better considered its own family for the time being.

- Marks, J. S.; Cannings, R. J. and Mikkola, H. (1999). "Family Strigidae (Typical Owls)". In del Hoyo, J.; Elliot, A. & Sargatal, J. (eds.) (1999). Handbook of the Birds of the World. Volume 5: Barn-Owls to Hummingbirds. Lynx Edicions. ISBN 84-87334-25-3
- Earhart, Caroline M. and Johnson, Ned K. (1970). "Size Dimorphism and Food Habits of North American Owls". Condor 72 (3): 251–264. doi:10.2307/1366002.
- Kelso, L. & Kelso, E. (1936). "The Relation of Feathering of Feet of American Owls to Humidity of Environment and to Life Zones". Auk 53 (1): 51–56. doi:10.2307/4077355.
- Olson, p. 131
- Feduccia, J. Alan; Ford, Norman L. (1970). "Some birds of prey from the Upper Pliocene of Kansas". Auk 87 (4): 795–797. doi:10.2307/4083714.
- Sánchez Marco, Antonio (2004). "Avian zoogeographical patterns during the Quaternary in the Mediterranean region and paleoclimatic interpretation". Ardeola 51 (1): 91–132.
- Olson, p. 167
- Olson, Storrs L. (1985). "The fossil record of birds". In: Farner, D.S.; King, J.R. & Parkes, Kenneth C. (eds.): Avian Biology 8: 79–238. Academic Press, New York.
https://en.wikipedia.org/wiki/Strigidae
4
We take our ABC's for granted, learning 26 letters in a precise order from our youngest days. When introduced to a second or third language later in life we may realize that even tongues similar to English contain slightly different alphabets--the Spanish ñ, the French ç--despite the fact that they evolved from the same roots. Historical variation in the English alphabet seems largely glossed over in contemporary education, but identifying some of the "missing letters" can help explain a few historical puzzles. First, there's the ampersand, considered the 27th letter of the English alphabet until about 150 years ago. Its name comes from its position at the end of the ABC's: The word “ampersand” came many years later when “&” was actually part of the English alphabet. In the early 1800s, school children reciting their ABCs concluded the alphabet with the &. It would have been confusing to say “X, Y, Z, and.” Rather, the students said, “and per se and.” “Per se” means “by itself,” so the students were essentially saying, “X, Y, Z, and by itself and.” Over time, “and per se and” was slurred together into the word we use today: ampersand. When a word comes about from a mistaken pronunciation, it’s called a mondegreen. Before the introduction of the Latin alphabet after the Roman conquest of Britain, Anglo-Saxon had an alphabet all its own, known as futhorc. In the ensuing battle of cultural power politics, Anglo-Saxon lost out. Collateral damage included the letter thorn (þ), pronounced with the hard "th" sound. It was replaced by the humble Y, always ready to do double duty in that ambiguous no-man's-land between consonants and vowels. This explains the anachronistic use of Y in titles like "Ye Olde English Shoppe"--it's just another spelling of "the." On Friday we'll take a look at another missing letter, the long s (resembling "f"). For a sneak peek and a list of nine other extinct English letters, check out this article from MentalFloss (via @johndcook).
http://mattdickenson.com/2013/02/20/micro-institutions-everywhere-the-english-alphabet/
4.0625
Slavery in ancient Greece

Slavery was a very common practice in Ancient Greek history, as in other places of the time. It is estimated that the majority of Athenian citizens owned at least one slave; most ancient writers considered slavery natural and even necessary. This paradigm was notably questioned in Socratic dialogues; the Stoics produced the first recorded condemnation of slavery. Modern historiographical practice distinguishes chattel (personal possession) slavery from land-bonded groups such as the penestae of Thessaly or the Spartan helots, who were more like medieval serfs (an enhancement to real estate). The chattel slave is an individual deprived of liberty and forced to submit to an owner, who may buy, sell or lease them like any other chattel. The academic study of slavery in ancient Greece is beset by significant methodological problems. Documentation is disjointed and very fragmented, focusing primarily on Athens. No treatises are specifically devoted to the subject, and jurisprudence was interested in slavery only inasmuch as it provided a source of revenue. Comedies and tragedies represented stereotypes, while iconography made no substantial differentiation between slaves and craftsmen.

The ancient Greeks had several words for slaves, which leads to textual ambiguity when they are studied out of their proper context. In Homer, Hesiod and Theognis of Megara, the slave was called δμώς / dmōs. The term has a general meaning but refers particularly to war prisoners taken as booty (in other words, property). During the classical period, the Greeks frequently used ἀνδράποδον / andrapodon (literally, "one with the feet of a man"), as opposed to τετράποδον / tetrapodon, "quadruped", or livestock. The most common word is δοῦλος / doulos, used in opposition to "free man" (ἐλεύθερος / eleútheros); an earlier form of the former appears in Mycenaean inscriptions as do-e-ro, "male slave" (or "servant", "bondman"; Linear B: 𐀈𐀁𐀫), or do-e-ra, "female slave" (or "maid-servant", "bondwoman"). The verb δουλεύω (which survives in Modern Greek, meaning "work") can be used metaphorically for other forms of dominion, as of one city over another or parents over their children. Finally, the term οἰκέτης / oiketēs was used, meaning "one who lives in the house", referring to household servants. Other terms used were less precise and required context:
- θεράπων / therapōn – At the time of Homer, the word meant "squire" (Patroclus was referred to as the therapōn of Achilles and Meriones as that of Idomeneus); during the classical age, it meant "servant".
- ἀκόλουθος / akolouthos – literally, "the follower" or "the one who accompanies"; also the diminutive ἀκολουθίσκος, used for page boys.
- παῖς / pais – literally "child", used in the same way as "houseboy", and also used in a derogatory way to address adult slaves.
- σῶμα / sōma – literally "body", used in the context of emancipation.

Origins of slavery

Slaves were present throughout the Mycenaean civilization, as documented in numerous tablets unearthed in Pylos. Two legal categories can be distinguished: "slaves (εοιο)" and "slaves of the god (θεοιο)", the god in this case probably being Poseidon. Slaves of the god are always mentioned by name and own their own land; their legal status is close to that of freemen.
The nature and origin of their bond to the divinity is unclear. The names of common slaves show that some of them came from Kythera, Chios, Lemnos or Halicarnassus and were probably enslaved as a result of piracy. The tablets indicate that unions between slaves and freemen were common and that slaves could work and own land. It appears that the major division in Mycenaean civilization was not between a free individual and a slave, but rather whether or not the individual belonged to the palace.

There is no continuity between the Mycenaean era and the time of Homer, when social structures reflected those of the Greek dark ages. The terminology differs: the slave is no longer do-e-ro (doulos) but dmōs. In the Iliad, slaves are mainly women taken as booty of war, while men were either ransomed or killed on the battlefield. In the Odyssey, the slaves also seem to be mostly women. These slaves were servants and sometimes concubines. There were some male slaves, especially in the Odyssey, a prime example being the swineherd Eumaeus. The slave was distinctive in being a member of the core part of the oikos ("family unit", "household"): Laertes eats and drinks with his servants; in the winter, he sleeps in their company. The term dmōs is not considered pejorative, and Eumaeus, the "divine" swineherd, bears the same Homeric epithet as the Greek heroes. Slavery remained, however, a disgrace. Eumaeus himself declares, "Zeus, of the far-borne voice, takes away the half of a man's virtue, when the day of slavery comes upon him".

It is difficult to determine when slave trading began in the archaic period. In Works and Days (8th century BC), Hesiod owns numerous dmōes, although their status is unclear. The presence of douloi is confirmed by lyric poets such as Archilochus or Theognis of Megara. According to epigraphic evidence, the homicide law of Draco (c. 620 BC) mentioned slaves. According to Plutarch, Solon (c. 594–593 BC) forbade slaves from practising gymnastics and pederasty. By the end of the period, references become more common. Slavery becomes prevalent at the very moment when Solon establishes the basis for Athenian democracy. Classical scholar Moses Finley likewise remarks that Chios, which, according to Theopompus, was the first city to organize a slave trade, also enjoyed an early democratic process (in the 6th century BC). He concludes that "one aspect of Greek history, in short, is the advance, hand in hand, of freedom and slavery."

All activities were open to slaves with the exception of politics. For the Greeks, politics was the only activity worthy of a citizen, the rest being relegated wherever possible to non-citizens. It was status that was of importance, not activity. The principal use of slavery was in agriculture, the foundation of the Greek economy. Some small landowners might own one slave, or even two. An abundant literature of manuals for landowners (such as the Economy of Xenophon or that of Pseudo-Aristotle) confirms the presence of dozens of slaves on the larger estates; they could be common labourers or foremen. The extent to which slaves were used as a labour force in farming is disputed. It is certain that rural slavery was very common in Athens, and that ancient Greece did not know of the immense slave populations found on the Roman latifundia. Slave labour was prevalent in mines and quarries, which had large slave populations, often leased out by rich private citizens.
The strategos Nicias leased a thousand slaves to the silver mines of Laurium in Attica; Hipponicos, 600; and Philomidès, 300. Xenophon indicates that they received one obolus per slave per day, amounting to 60 drachmas per year. This was one of the most prized investments for Athenians. The number of slaves working in the Laurium mines or in the mills processing ore has been estimated at 30,000. Xenophon suggested that the city buy a large number of slaves, up to three state slaves per citizen, so that their leasing would assure the upkeep of all the citizens.

Slaves were also used as craftsmen and tradespersons. As in agriculture, they were used for labour that was beyond the capability of the family. The slave population was greatest in workshops: the shield factory of Lysias employed 120 slaves, and the father of Demosthenes owned 32 cutlers and 20 bedmakers. Slaves were also employed in the home. The domestic's main role was to stand in for his master at his trade and to accompany him on trips. In time of war he was batman to the hoplite. The female slave carried out domestic tasks, in particular bread baking and textile making. Only the poorest citizens did not possess a domestic slave.

It is difficult to estimate the number of slaves in ancient Greece, given the lack of a precise census and variations in definitions during that era. It is certain that Athens had the largest slave population, with as many as 80,000 in the 6th and 5th centuries BC, on average three or four slaves per household. In the 5th century BC, Thucydides remarked on the desertion of 20,890 slaves during the war of Decelea, mostly tradesmen. The lowest estimate, of 20,000 slaves, during the time of Demosthenes, corresponds to one slave per family. Between 317 BC and 307 BC, the tyrant Demetrius Phalereus ordered a general census of Attica, which arrived at the following figures: 21,000 citizens, 10,000 metics and 400,000 slaves. The orator Hypereides, in his Against Areistogiton, recalls the effort to enlist 15,000 male slaves of military age after the defeat of the southern Greeks at the Battle of Chaeronea (338 BC), a number which corresponds to the figures of Ctesicles. According to the literature, it appears that the majority of free Athenians owned at least one slave. Aristophanes, in Plutus, portrays poor peasants who have several slaves; Aristotle defines a house as containing freemen and slaves. Conversely, not owning even one slave was a clear sign of poverty. In the celebrated discourse of Lysias For the Invalid, a cripple pleading for a pension explains, "my income is very small and now I'm required to do these things myself and do not even have the means to purchase a slave who can do these things for me." However, the huge slave populations of the Romans were unknown in ancient Greece. When Athenaeus cites the case of Mnason, friend of Aristotle and owner of a thousand slaves, this appears to be exceptional. Plato, owner of five slaves at the time of his death, describes the very rich as owning 50 slaves.

Sources of supply

There were four primary sources of slaves: war, in which the defeated would become slaves to the victorious unless a more objective outcome was reached; piracy (at sea); banditry (on land); and international trade. By the rules of war of the period, the victor possessed absolute rights over the vanquished, whether they were soldiers or not. Enslavement, while not systematic, was common practice.
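As a quick check, the mine-leasing figures quoted earlier from Xenophon are mutually consistent under the usual monetary conventions. The sketch below assumes the standard Attic rate of 6 obols to the drachma, a 360-day accounting year, and 6,000 drachmas to the talent; none of these conventions is stated in the text itself.

```python
# Sanity check of the slave-rental figures cited from Xenophon.
# Assumed conventions (not given in the passage): 6 obols = 1 drachma,
# a 360-day accounting year, and 6,000 drachmas = 1 talent.
OBOLS_PER_DRACHMA = 6
DAYS_PER_YEAR = 360
DRACHMAS_PER_TALENT = 6_000

rent_obols_per_day = 1  # one obolus per slave per day
annual_drachmas = rent_obols_per_day * DAYS_PER_YEAR / OBOLS_PER_DRACHMA
print(annual_drachmas)  # 60.0 -- matches the "60 drachmas per year" figure

# On the same assumptions, Nicias' thousand leased slaves would gross roughly:
gross_drachmas = 1_000 * annual_drachmas
print(gross_drachmas, gross_drachmas / DRACHMAS_PER_TALENT)  # 60000.0 drachmas, i.e. 10.0 talents a year
```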
Thucydides recalls that 7,000 inhabitants of Hyccara in Sicily were taken prisoner by Nicias and sold for 120 talents in the neighbouring village of Catania. Likewise in 348 BC the population of Olynthus was reduced to slavery, as was that of Thebes in 335 BC by Alexander the Great and that of Mantineia by the Achaean League. The existence of Greek slaves was a constant source of discomfort for free Greeks. The enslavement of cities was also a controversial practice. Some generals refused, such as the Spartans Agesilaus II and Callicratidas. Some cities passed accords to forbid the practice: in the middle of the 3rd century BC, Miletus agreed not to reduce any free Knossian to slavery, and vice versa. Conversely, the emancipation by ransom of a city that had been entirely reduced to slavery carried great prestige: Cassander, in 316 BC, restored Thebes. Before him, Philip II of Macedon enslaved and then emancipated Stageira. Piracy and banditry Piracy and banditry provided a significant and consistent supply of slaves, though the significance of this source varied according to era and region. Pirates and brigands would demand ransom whenever the status of their catch warranted it. Whenever ransom was not paid or not warranted, captives would be sold to a trafficker. In certain areas, piracy was practically a national specialty, described by Thucydides as "the old-fashioned" way of life. Such was the case in Acarnania, Crete, and Aetolia. Outside of Greece, this was also the case with Illyrians, Phoenicians, and Etruscans. During the Hellenistic period, Cilicians and the mountain peoples from the coasts of Anatolia could also be added to the list. Strabo explains the popularity of the practice among the Cilicians by its profitability; Delos, not far away, allowed for "moving a myriad of slaves daily". The growing influence of the Roman Republic, a large consumer of slaves, led to development of the market and an aggravation of piracy. In the 1st century BC, however, the Romans largely eradicated piracy to protect the Mediterranean trade routes. There was slave trade between kingdoms and states of the wider region. The fragmentary list of slaves confiscated from the property of the mutilators of the Hermai mentions 32 slaves whose origin have been ascertained: 13 came from Thrace, 7 from Caria, and the others came from Cappadocia, Scythia, Phrygia, Lydia, Syria, Ilyria, Macedon and Peloponnese. Local professionals sold their own people to Greek slave merchants. The principal centres of the slave trade appear to have been Ephesus, Byzantium, and even faraway Tanais at the mouth of the Don. Some "barbarian" slaves were victims of war or localised piracy, but others were sold by their parents. There is a lack of direct evidence of slave traffic, but corroborating evidence exists. Firstly, certain nationalities are consistently and significantly represented in the slave population, such as the corps of Scythian archers employed by Athens as a police force—originally 300, but eventually nearly a thousand. Secondly, the names given to slaves in the comedies often had a geographical link; thus Thratta, used by Aristophanes in The Wasps, The Acharnians, and Peace, simply signified Thracian woman. Finally, the nationality of a slave was a significant criterion for major purchasers; the ancient advice was not to concentrate too many slaves of the same origin in the same place, in order to limit the risk of revolt. 
It is also probable that, as with the Romans, certain nationalities were considered more productive as slaves than others. The price of slaves varied in accordance with their ability. Xenophon valued a Laurion miner at 180 drachmas, while a workman at major works was paid one drachma per day. Demosthenes' father's cutlers were valued at 500 to 600 drachmas each. Price was also a function of the quantity of slaves available; in the 4th century BC they were abundant and it was thus a buyer's market. A tax on sale revenues was levied by the market cities. For instance, a large slave market was organized during the festivities at the temple of Apollo at Actium. The Acarnanian League, which was in charge of the logistics, received half of the tax proceeds, the other half going to the city of Anactorion, of which Actium was a part. Buyers enjoyed a guarantee against latent defects; the transaction could be invalidated if the bought slave turned out to be crippled and the buyer had not been warned about it.

Curiously, it appears that the Greeks did not "breed" their slaves, at least during the Classical Era, though the proportion of houseborn slaves appears to have been rather large in Ptolemaic Egypt and in manumission inscriptions at Delphi. Sometimes the cause of this was natural; mines, for instance, were exclusively a male domain. On the other hand, there were many female domestic slaves. The example of African slaves in the American South, by contrast, demonstrates that slave populations can multiply. This incongruity remains relatively unexplained. Xenophon advised that male and female slaves should be lodged separately, that "…nor children born and bred by our domestics without our knowledge and consent—no unimportant matter, since, if the act of rearing children tends to make good servants still more loyally disposed, cohabiting but sharpens ingenuity for mischief in the bad." The explanation is perhaps economic; even a skilled slave was cheap, so it may have been cheaper to purchase a slave than to raise one. Additionally, childbirth placed the slave-mother's life at risk, and the baby was not guaranteed to survive to adulthood. Houseborn slaves (oikogeneis) often constituted a privileged class. They were, for example, entrusted to take the children to school; they were "pedagogues" in the first sense of the term. Some of them were the offspring of the master of the house, but in most cities, notably Athens, a child inherited the status of its mother.

Status of slaves

The Greeks had many degrees of enslavement. There was a multitude of categories, ranging from free citizen to chattel slave, and including Penestae or helots, disenfranchised citizens, freedmen, bastards, and metics. The common ground was the deprivation of civic rights; the chattel slave, at one extreme, had no rights. The categories can be distinguished by criteria such as:
- Right to own property
- Authority over the work of another
- Power of punishment over another
- Legal rights and duties (liability to arrest and/or arbitrary punishment, or to litigate)
- Familial rights and privileges (marriage, inheritance, etc.)
- Possibility of social mobility (manumission or emancipation, access to citizen rights)
- Religious rights and obligations
- Military rights and obligations (military service as servant, heavy or light soldier, or sailor)

Athenian slaves were the property of their master (or of the state), who could dispose of them as he saw fit. He could give, sell, rent, or bequeath them.
A slave could have a spouse and children, but the slave family was not recognized by the state, and the master could scatter the family members at any time. Slaves had fewer judicial rights than citizens and were represented by their master in all judicial proceedings. A misdemeanour that would result in a fine for the free man would result in a flogging for the slave; the ratio seems to have been one lash for one drachma. With several minor exceptions, the testimony of a slave was not admissible except under torture. Slaves were tortured in trials because they often remained loyal to their master. A famous example of trusty slave was Themistocles's Persian slave Sicinnus (the counterpart of Ephialtes of Trachis), who, despite his Persian origin, betrayed Xerxes and helped Athenians in the Battle of Salamis. Despite torture in trials, the Athenian slave was protected in an indirect way: if he was mistreated, the master could initiate litigation for damages and interest (δίκη βλάβης / dikē blabēs). Conversely, a master who excessively mistreated a slave could be prosecuted by any citizen (γραφὴ ὕβρεως / graphē hybreōs); this was not enacted for the sake of the slave, but to avoid violent excess (ὕβρις / hubris). Isocrates claimed that "not even the most worthless slave can be put to death without trial"; the master's power over his slave was not absolute. Draco's law apparently punished with death the murder of a slave; the underlying principle was: "was the crime such that, if it became more widespread, it would do serious harm to society?" The suit that could be brought against a slave's killer was not a suit for damages, as would be the case for the killing of cattle, but a δίκη φονική (dikē phonikē), demanding punishment for the religious pollution brought by the shedding of blood. In the 4th century BC, the suspect was judged by the Palladion, a court which had jurisdiction over unintentional homicide; the imposed penalty seems to have been more than a fine but less than death—maybe exile, as was the case in the murder of a Metic. However, slaves did belong to their master's household. A newly-bought slave was welcomed with nuts and fruits, just like a newly-wed wife. Slaves took part in most of the civic and family cults; they were expressly invited to join the banquet of the Choes, second day of the Anthesteria, and were allowed initiation into the Eleusinian Mysteries. A slave could claim asylum in a temple or at an altar, just like a free man. The slaves shared the gods of their masters and could keep their own religious customs if any. Slaves could not own property, but their masters often let them save up to purchase their freedom, and records survive of slaves operating businesses by themselves, making only a fixed tax-payment to their masters. Athens also had a law forbidding the striking of slaves: if a person struck what appeared to be a slave in Athens, that person might find himself hitting a fellow-citizen, because many citizens dressed no better. It astonished other Greeks that Athenians tolerated back-chat from slaves. Athenian slaves fought together with Athenian freemen at the battle of Marathon, and the monuments memorialize them. It was formally decreed before the battle of Salamis that the citizens should "save themselves, their women, children, and slaves". Slaves had special sexual restrictions and obligations. 
For example, a slave could not engage free boys in pederastic relationships ("A slave shall not be the lover of a free boy nor follow after him, or else he shall receive fifty blows of the public lash."), and they were forbidden from the palaestrae ("A slave shall not take exercise or anoint himself in the wrestling-schools."). Both laws are attributed to Solon. Fathers wanting to protect their sons from unwanted advances provided them with a slave guard, called a paidagogos, to escort the boy in his travels. The sons of vanquished foes would be enslaved and often forced to work in male brothels, as in the case of Phaedo of Elis, who at the request of Socrates was bought and freed from such an enterprise by the philosopher's rich friends. On the other hand, it is attested in sources that the rape of slaves was persecuted, at least occasionally. Slaves in Gortyn In Gortyn, in Crete, according to a code engraved in stone dating to the 6th century BC, slaves (doulos or oikeus) found themselves in a state of great dependence. Their children belonged to the master. The master was responsible for all their offences, and, inversely, he received amends for crimes committed against his slaves by others. In the Gortyn code, where all punishment was monetary, fines were doubled for slaves committing a misdemeanour or felony. Conversely, an offence committed against a slave was much less expensive than an offence committed against a free person. As an example, the rape of a free woman by a slave was punishable by a fine of 200 staters (400 drachms), while the rape of a non-virgin slave by another slave brought a fine of only one obolus (a sixth of a drachm). Slaves did have the right to possess a house and livestock, which could be transmitted to descendants, as could clothing and household furnishings. Their family was recognized by law: they could marry, divorce, write a testament and inherit just like free men. A specific case: debt slavery Prior to its interdiction by Solon, Athenians practiced debt enslavement: a citizen incapable of paying his debts became "enslaved" to the creditor. The exact nature of this dependency is a much controversial issue among modern historians: was it truly slavery or another form of bondage? However, this issue primarily concerned those peasants known as "hektēmoroi" working leased land belonging to rich landowners and unable to pay their rents. In theory, those so enslaved would be liberated when their original debts were repaid. The system was developed with variants throughout the Near East and is cited in the Bible. Solon put an end to it with the σεισάχθεια / seisachtheia, liberation of debts, which prevented all claim to the person by the debtor and forbade the sale of free Athenians, including by themselves. Aristotle in his Constitution of the Athenians quotes one of Solon's poems: And many a man whom fraud or law had sold Far from his god-built land, an outcast slave, I brought again to Athens; yea, and some, Exiles from home through debt’s oppressive load, Speaking no more the dear Athenian tongue, But wandering far and wide, I brought again; And those that here in vilest slavery (douleia) Crouched ‘neath a master’s (despōtes) frown, I set them free. Though much of Solon's vocabulary is that of "traditional" slavery, servitude for debt was at least different in that the enslaved Athenian remained an Athenian, dependent on another Athenian, in his place of birth. 
It is this aspect which explains the great wave of discontent with slavery of the 6th century BC, which was not intended to free all slaves but only those enslaved by debt. The reforms of Solon left two exceptions: the guardian of an unmarried woman who had lost her virginity had the right to sell her as a slave, and a citizen could "expose" (abandon) unwanted newborn children. The practice of manumission is confirmed to have existed in Chios from the 6th century BC. It probably dates back to an earlier period, as it was an oral procedure. Informal emancipations are also confirmed in the classical period. It was sufficient to have witnesses, who would escort the citizen to a public emancipation of his slave, either at the theatre or before a public tribunal. This practice was outlawed in Athens in the middle of the 6th century BC to avoid public disorder. The practice became more common in the 4th century BC and gave rise to inscriptions in stone which have been recovered from shrines such as Delphi and Dodona. They primarily date to the 2nd and 1st centuries BC, and the 1st century AD. Collective manumission was possible; an example is known from the 2nd century BC in the island of Thasos. It probably took place during a period of war as a reward for the slaves' loyalty, but in most cases the documentation deals with a voluntary act on the part of the master (predominantly male, but in the Hellenistic period also female). The slave was often required to pay for himself an amount at least equivalent to his street value. To this end they could use their savings or take a so-called "friendly" loan (ἔρανος / eranos) from their master, a friend or a client like the hetaera Neaira did. Emancipation was often of a religious nature, where the slave was considered to be "sold" to a deity, often Delphian Apollo, or was consecrated after his emancipation. The temple would receive a portion of the monetary transaction and would guarantee the contract. The manumission could also be entirely civil, in which case the magistrate played the role of the deity. The slave's freedom could be either total or partial, at the master's whim. In the former, the emancipated slave was legally protected against all attempts at re-enslavement—for instance, on the part of the former master's inheritors. In the latter case, the emancipated slave could be liable to a number of obligations to the former master. The most restrictive contract was the paramone, a type of enslavement of limited duration during which time the master retained practically absolute rights. In regard to the city, the emancipated slave was far from equal to a citizen by birth. He was liable to all types of obligations, as one can see from the proposals of Plato in The Laws: presentation three times monthly at the home of the former master, forbidden to become richer than him, etc. In fact, the status of emancipated slaves was similar to that of metics, the residing foreigners, who were free but did not enjoy a citizen's rights. Spartan citizens used helots, a dependent group collectively owned by the state. It is uncertain whether they had chattel slaves as well. 
There are mentions of people manumitted by Spartans, which was supposedly forbidden for helots, or sold outside of Lakonia: the poet Alcman; a Philoxenos from Cytherea, reputedly enslaved with all his fellow citizens when his city was conquered, later sold to an Athenian; a Spartan cook bought by Dionysius the Elder or by a king of Pontus, both versions being mentioned by Plutarch; and the famous Spartan nurses, much appreciated by Athenian parents. Some texts mention both slaves and helots, which seems to indicate that they were not the same thing. Plato in Alcibiades I cites "the ownership of slaves, and notably helots" among the Spartan riches, and Plutarch writes about "slaves and helots". Finally, according to Thucydides, the agreement that ended the 464 BC revolt of helots stated that any Messenian rebel who might hereafter be found within the Peloponnese was "to be the slave of his captor", which means that the ownership of chattel slaves was not illegal at that time. Most historians thus concur that chattel slaves were indeed used in the Greek city-state of Sparta, at least after the Lacedemonian victory of 404 BC against Athens, but not in great numbers and only among the upper classes. As was in the other Greek cities, chattel slaves could be purchased at the market or taken in war. It is difficult to appreciate the condition of Greek slaves. According to Aristotle, the daily routine of slaves could be summed up in three words: "work, discipline, and feeding". Xenophon's advice is to treat slaves as domestic animals, that is to say punish disobedience and reward good behaviour. For his part, Aristotle prefers to see slaves treated as children and to use not only orders but also recommendations, as the slave is capable of understanding reasons when they are explained. Greek literature abounds with scenes of slaves being flogged; it was a means of forcing them to work, as were control of rations, clothing, and rest. This violence could be meted out by the master or the supervisor, who was possibly also a slave. Thus, at the beginning of Aristophanes' The Knights (4–5), two slaves complain of being "bruised and thrashed without respite" by their new supervisor. However, Aristophanes himself cites what is a typical old saw in ancient Greek comedy: "He also dismissed those slaves who kept on running off, or deceiving someone, or getting whipped. They were always led out crying, so one of their fellow slaves could mock the bruises and ask then: 'Oh you poor miserable fellow, what's happened to your skin? Surely a huge army of lashes from a whip has fallen down on you and laid waste your back?'" The condition of slaves varied very much according to their status; the mine slaves of Laureion and the pornai (brothel prostitutes) lived a particularly brutal existence, while public slaves, craftsmen, tradesmen and bankers enjoyed relative independence. In return for a fee (ἀποφορά / apophora) paid to their master, they could live and work alone. They could thus earn some money on the side, sometimes enough to purchase their freedom. Potential emancipation was indeed a powerful motivator, though the real scale of this is difficult to estimate. Ancient writers considered that Attic slaves enjoyed a "peculiarly happy lot": Pseudo-Xenophon deplores the liberties taken by Athenian slaves: "as for the slaves and Metics of Athens, they take the greatest licence; you cannot just strike them, and they do not step aside to give you free passage". 
This alleged good treatment did not prevent 20,000 Athenian slaves from running away at the end of the Peloponnesian War at the incitement of the Spartan garrison at Attica in Decelea. These were principally skilled artisans (kheirotekhnai), probably among the better-treated slaves. The title of a 4th-century comedy by Antiphanes, The Runaway-catcher (Δραπεταγωγός), suggests that slave flight was not uncommon. Conversely, there are no records of a large-scale Greek slave revolt comparable to that of Spartacus in Rome. It can probably be explained by the relative dispersion of Greek slaves, which would have prevented any large-scale planning. Slave revolts were rare, even in Rome. Individual acts of rebellion of slaves against their master, though scarce, are not unheard of; a judicial speech mentions the attempted murder of his master by a boy slave, not 12 years old. Views of Greek slavery Very few authors of antiquity call slavery into question. To Homer and the pre-classical authors, slavery was an inevitable consequence of war. Heraclitus states that "War is the father of all, the king of all ... he turns some into slaves and sets others free". During the classical period, the main justification for slavery was economic. From a philosophical point of view, the idea of "natural" slavery emerged at the same time; thus, as Aeschylus states in The Persians, the Greeks "[o]f no man are they called the slaves or vassals", while the Persians, as Euripides states in Helen, "are all slaves, except one" — the Great King. Hippocrates theorizes about this latent idea at the end of the 5th century BC. According to him, the temperate climate of Anatolia produced a placid and submissive people. This explanation is reprised by Plato, then Aristotle in Politics, where he develops the concept of "natural slavery": "for he that can foresee with his mind is naturally ruler and naturally master, and he that can do these things with his body is subject and naturally a slave." As opposed to an animal, a slave can comprehend reason but "…has not got the deliberative part at all." In parallel, the concept that all men, whether Greek or barbarian, belonged to the same race was being developed by the Sophists and thus that certain men were slaves although they had the soul of a freeman and vice versa. Aristotle himself recognized this possibility and argued that slavery could not be imposed unless the master was better than the slave, in keeping with his theory of "natural" slavery. The Sophists concluded that true servitude was not a matter of status but a matter of spirit; thus, as Menander stated, "be free in the mind, although you are slave: and thus you will no longer be a slave". This idea, repeated by the Stoics and the Epicurians, was not so much an opposition to slavery as a trivialisation of it. The Greeks could not comprehend an absence of slaves. Slaves exist even in the "Cloudcuckooland" of Aristophanes' The Birds as well as in the ideal cities of Plato's Laws or Republic. The utopian cities of Phaleas of Chalcedon and Hippodamus of Miletus are based on the equal distribution of property, but public slaves are used respectively as craftsmen and land workers. The "reversed cities" placed women in power or even saw the end of private property, as in Lysistrata or Assemblywomen, but could not picture slaves in charge of masters. The only societies without slaves were those of the Golden Age, where all needs were met without anyone having to work. 
In this type of society, as explained by Plato, one reaped generously without sowing. In Telekleides' Amphictyons barley loaves fight with wheat loaves for the honour of being eaten by men. Moreover, objects move themselves—dough kneads itself, and the jug pours itself. Similarly, Aristotle said that slaves would not be necessary "if every instrument could accomplish its own work... the shuttle would weave and the plectrum touch the lyre without a hand to guide them", like the legendary constructs of Daedalus and Hephaestus. Society without slaves is thus relegated to a different time and space. In a "normal" society, one needs slaves. Slavery in Greek antiquity has long been an object of apologetic discourse among Christians, who are typically awarded the merit of its collapse. From the 16th century the discourse became moralizing in nature. The existence of colonial slavery had significant impact on the debate, with some authors lending it civilizing merits and others denouncing its misdeeds. Thus Henri-Alexandre Wallon in 1847 published a History of Slavery in Antiquity among his works for the abolition of slavery in the French colonies. In the 19th century, a politico-economic discourse emerged. It concerned itself with distinguishing the phases in the organisation of human societies and correctly identifying the place of Greek slavery. The influence of Marx is decisive; for him the ancient society was characterized by development of private ownership and the dominant (and not secondary as in other pre-capitalist societies) character of slavery as a mode of production. The Positivists represented by the historian Eduard Meyer (Slavery in Antiquity, 1898) were soon to oppose the Marxist theory. According to him slavery was the foundation of Greek democracy. It was thus a legal and social phenomenon, and not economic. Current historiography developed in the 20th century; led by authors such as Joseph Vogt, it saw in slavery the conditions for the development of elites. Conversely, the theory also demonstrates an opportunity for slaves to join the elite. Finally, Vogt estimates that modern society, founded on humanist values, has surpassed this level of development. In 2011, Greek slavery remains the subject of historiographical debate, on two questions in particular: can it be said that ancient Greece was a "slave society", and did Greek slaves comprise a social class? - A traditional pose in funerary steles, see for instance Felix M. Wassermann, "Serenity and Repose: Life and Death on Attic Tombstones" The Classical Journal, Vol. 64, No. 5, p.198. - J.M.Roberts, The New Penguin History of the World, p.176–177, 223 - Chantraine, s.v. δμώς. - For instance Odyssey 1:398, where Telemachus mentions "the slaves that goodly Odysseus won for [him]". - Used once by Homer in Iliad 7:475 to refer to prisoners taken in war; the line was athetized by Aristarchus of Samothrace following Zenodotus and Aristophanes of Byzantium, see Kirk, p.291. - Chantraine, s.v. ἀνερ. - Definition from LSJ. - Mycenean transliterations can be confusing and do not directly reflect pronunciation; for clarification see the article about Linear B. - Chantraine, s.v. δοῦλος. See also Mactoux (1981). - Chantraine, s.v. οἰκος. - Iliad, 16:244 and 18:152. - Iliad, 23:113. - Chantraine, s.v. θεράπων. - Chantraine, s.v. ἀκόλουθος. - Chantraine, s.v. παῖς. - Cartledge, p.137. - Chantraine, s.v. σῶμα. - Garlan, p.32. - Burkert, p.45. - Garlan, p.35. - Mele, pp.115–155. - Garlan, p.36. 
- For instance Chryseis (1:12–3, 29–30, 111–5), Briseis (2:688–9), Diomede (6:654–5), Iphis (6:666–8) and Hecamede (11:624–7). - See in the Iliad the pleas of Adrastus the Trojan (1:46–50), the sons of Antimachus (11:131–5) and Lycaon (21:74–96), all begging for mercy in exchange of a ransom. - There are 50 of them in Ulysses' house (22:421) and in Alcinous' house (7:103). - Before his fight with Achilles, Hector predicts for his wife Andromache a life of bondage and mentions weaving and water-fetching (6:454–8). In the Odyssey, servants tend the fire (20:123), prepare the suitors' feast (1:147), grind wheat (7:104, 20:108–9), make the bed (7:340–2) and take care of the guests. - In the Iliad, Chryseis sleeps with Agamemnon, Briseis and Diomede with Achilles, Iphis with Patroclus. In the Odyssey, twelve female servants sleep with the suitors (20:6–8) against Euryclea's direct orders (22:423–425). - Odyssey, 16:140–1. - Odyssey, 11:188–91. - Odyssey, 14:3. - Garlan, p.43. - Odyssey 17:322–323. Online version of Butcher-Lang 1879 translation. - For instance Works and Days, 405. - "κατὰ ταὐτὰ φόνοθ δίκας εἷναι δοῦλον κτείναντι ἢ ἐλεὐτερον." Dareste, Haussoulier and Reinach, 4, 5, 8. - Life of Solon, 1:6. - Apud Athenaeus, 6:265bc = FGrH 115, fgt.122. - Finley (1997), pp.170–171. - Finley (1997), p.180. - Finley (1997), p.148. - Finley (1997), p.149. - Jameson argues in favour of a very large use of slaves; Wood (1983 and 1988) disputes it. - Finley (1997), p.150. - Poroi (On Revenues), 4. - Lauffer, p.916. - Demosthenes, 12:8–19. - Demosthenes, Against Aphobos, 11:9. - Finley (1997), pp.151–152. - Jones, pp.76–79. - Ctesicles, apud Athenaeus 6:272c. - Ctesicles was the author of a history preserved as two fragments in the Athenaeus. - Politics, 252a26–b15. - Lysias, For the invalid, 3. - Athenaeus, 6:264d. - Republic, 9:578d–e. - Thucydide, 8:40, 2. - See Ducrey for further reading. - Thucydides, 6:62 and 7:13. - Garlan, p. 57. - Plutarch, Life of Agesilaus, 7:6. - Xenophon, Hellenica, 1:6, 14. - Diodorus Siculus, 19:53,2. - Plutarch, Life of Alexander, 7:3. - The Greeks made little differentiation between pirates and bandits, both being called lēstai or peiratai. Brulé (1978a), p.2. - See Ormerod, Brulé (1978b) and Gabrielsen for further reading. - Finley (1997), p.230. - Thucydides, 1:5, 3. - Strabo, 14:5, 2. - Brulé (1978a), p.6. - Brulé (1978a), pp.6–7. - Pritchett and Pippin (1956), p.278 and Pritchett (1961), p.27. - Herodotus, 5:6; Philostratus II, Life of Apollonius Tyana, 18:7, 12. - Plassart, pp.151–213. - During the Classical and Hellenistic periods, it was the master who named the slave; this could be the master's name, an ethnic name as mentioned above, a name from their native area (Manes for Lydian, Midas for a Phrygian, etc.), a historical name (Alexander, Cleopatra, etc.). In short, a slave could carry practically any name, but barbarian names could only be given to slaves. Masson, pp.9—21. - Plato, Laws, 777cd; Pseudo-Aristotle, Economics, 1:5. - Garlan, p.61. - Circa 216 BC. Inscriptiones Graecae IX 1², 2, 583. - Hypereides, Against Athenogenes, 15 and 22. - Garlan, p.59. - Finley (1997), p.155. - The Economist, IX. Trans H. G. Dakyns, accessed 16 May 2006. - Pritchett and Pippin, pp.276–281. - Garlan, p.58. Finley (1997), p.154–155 remains doubtful. - Garlan, p.58. - Carlier, p.203. - Finley (1997), p.147. - Finley (1997), pp.165–89. - Garlan, p.47. - Antiphon, First Tetralogy, 2:7, 4:7; Demosthenes, Against Pantenos, 51 (2) and Against Evergos, 14, 15, 60. 
- For instance Lycurgus, Against Leocrates, 29. - Aeschines, Against Timarchus, 17. - Panathenaicus, 181.http://www.perseus.tufts.edu/hopper/text?doc=Perseus%3Atext%3A1999.01.0144%3Aspeech%3D12%3Asection%3D181 - Morrow, p.212. - Lycurgus, Against Leocrates, 66. - Morrow, p.213. - Aristotle, Constitution of the Athenians, 57:3. - Burkert, p.259. - Carlier, p.204. - Old Oligarch, Constitution of the Athenians, 10. - Pausanias, 1:29, 6. - Plutarch, Life of Themistocles, 10:4–5. - Aeschines, Against Timarchos 1.138–139 - Diogenes Laertius, Lives of the Philosophers, 2.105 - Wilhelm Kroll "Knabenliebe" in Pauly-Wissowa, Realencyclopaedie der klassischen Altertumswissenschaft, vol. 11, cols. 897–906 - Lévy (1995), p.178. - Finley (1997), p.200. - Finley (1997), p.201. - Lévy (1995), p.179. - Aristotle, Constitution of the Athenians, See also 1:2 and Plutarch, Life of Solon, 13:2. - Literally, "six-parters" or "sixthers", because they owed either one-sixth or five-sixths (depending on the interpretation) of their harvest. See Von Fritz for further reading. - Deuteronomy, 15:12–17. - Constitution of the Athenians 12:4. Trans. by Sir Frederic Kenyon, accessed 15 May 2006. - Finley, p.174. - Finley (1997), p.160. - Plutarch, Life of Solon 23.2. - Brulé (1992), p.83. - Garlan, p.79. - Garlan, p.80. - Dunant and Pouilloux, pp.35–37, no.173. - Demosthenes, Against Neaira, 59:29–32. - See Foucart for further reading. - Garlan, p.82. - Garlan, p.83. - Garlan, p.84. - Laws, 11:915 a–c. - Garlan, p.87. - Herakleides Lembos, fgt. 9 Dilts and Suidas, s.v. Ἀλκμάν. - Suidas, s.v. Φιλόξενος. - Life of Lycurgus, 12:13. - Life of Lycurgus, 16:5; Life of Alcibiades, 5:3. - "…ἀνδραπόδων κτήσει τῶν τε ἄλλων καὶ τῶν εἱλωτικῶν", Alcibiades I, 122d. - "…δοὐλοις καὶ Εἴλωσι", Comp. Lyc. et Num., 2. - Oliva, pp.172–173; Ducat, p.55; Lévy (2003), pp.112–113. - Economics, 1344a35. - Xenophon, Economics, 13:6. - Politics, I, 3, 14. - Peace, v.743–749. Trans. Ian Johnston, 2006, accessed 17 May 06. - Garlan, p.147. - Garlan, p.148. - Finley (1997), p. 165. - Morrow, p.210. See Plato, The Republic, 8:563b; Demosthenes, Third Philippic, 3; Aeschines, Against Timarchos, 54; Aristophanes, Assemblywomen, 721–22 and Plautus, Stichus, 447–50. - Constitution of the Athenians, I, 10. - Thucydides (7:27). - Apud Athenaeus, 161e. - Cartledge, p.139. - Garlan, p.180. - Finley (1997), p.162–3. - Antiphon, On the Murder of Herodes, 69. - Heraclitus, frag.53. - Mactoux (1980), p.52. - The Persians, v.242. Trans. ed. Herbert Weir Smyth, accessed 17 May 2006. - Helen, v.276. - Hippocratic corpus, Of Airs, Waters, and Places (Peri aeron hydaton topon), 23. - Republic, 4:435a–436a. - Politics, 7:1327b. - Politics, 1:2, 2. Trans. H. Rackham, accessed 17 May 2006. - Politics, 1:13, 17. - John D. Bury and Russell Meiggs (4th ed. 1975): A History of Greece to the Death of Alexander the Great. New York: St. Martin's Press, page 375 - For instance Hippias of Elis apud Platon, Protagoras, 337c; Antiphon, Pap. Oxyr., 9:1364. - An idea already expressed by Euripides, Ion, 854–856frag.831. - Politics, 1:5, 10. - Menander, frag. 857. - Garlan, p.130. - Republic, 10:469b sq. and 470c. - Apud Aristotle, Politics, 1267b. - Apud Aristotle, Politics, 1268a. - Politics, 271a–272b. - Apud Athenaeus, 268 b–d. - Aristotle, Politics, Book 1 Part 4 - Garlan, p.8. - Garlan, p.10–13. - Garlan, p.13–14. - Garlan, p.19–20. - Garlan, p.201. 
- This article draws heavily on the Esclavage en Grèce antique article in the French-language Wikipedia, which was accessed in the version of 17 May 2006. - (French) Brulé, P. (1978a) "Signification historique de la piraterie grecque ", Dialogues d'histoire ancienne no.4 (1978), pp. 1–16. - (French) Brulé, P. (1992) "Infanticide et abandon d'enfants", Dialogues d'histoire ancienne no.18 (1992), pp. 53–90. - Burkert, W. Greek Religion. Oxford: Blackwell Publishing, 1985. ISBN 0-631-15624-0, originally published as Griechische Religion der archaischen und klassischen Epoche. Stuttgart: Verlag W. Kohlhammer, 1977. - (French) Carlier, P. Le IVe siècle grec jusqu'à la mort d'Alexandre. Paris: Seuil, 1995. ISBN 2-02-013129-3 - Cartledge, P.. "Rebels and Sambos in Classical Greece", Spartan Reflections. Berkeley: University of California Press, 2003, p. 127–152 ISBN 0-520-23124-4 - (French) Chantraine, P. Dictionnaire étymologique de la langue grecque. Paris: Klincksieck, 1999 (new edition). ISBN 2-252-03277-4 - (French) Dareste R., Haussoullier B., Reinach Th. Recueil des inscriptions juridiques grecques, vol.II. Paris: E. Leroux, 1904. - (French) Ducat, Jean. Les Hilotes, BCH suppl.20. Paris: publications of the École française d'Athènes, 1990 ISBN 2-86958-034-7 - (French) Dunant, C. and Pouilloux, J. Recherches sur l'histoire et les cultes de Thasos II. Paris: publications of the École française d'Athènes, 1958. - Finley, M. (1997). Économie et société en Grèce ancienne. Paris: Seuil, 1997 ISBN 2-02-014644-4, originally published as Economy and Society in Ancient Greece. London: Chatto and Windus, 1981. - Garlan, Y. Les Esclaves en Grèce ancienne. Paris: La Découverte, 1982. 1982 ISBN 2-7071-2475-3, translated in English as Slavery in Ancient Greece. Ithaca, N.Y.: Cornell University Press, 1988 (1st edn. 1982) ISBN 0-8014-1841-0 - Kirk, G.S. (editor). The Iliad: a Commentary, vol.II (books 5–8). Cambridge: Cambridge University Press, 1990. ISBN 0-521-28172-5 - Jameson, M.H. "Agriculture and Slavery in Classical Athens", Classical Journal, no.73 (1977–1978), pp. 122–145. - Jones, A.H.M.. Athenian Democracy. Oxford: Blackwell Publishing, 1957. - (German) Lauffer, S. "Die Bergwerkssklaven von Laureion", Abhandlungen no.12 (1956), pp. 904–916. - (French) Lévy, E. (1995). La Grèce au Ve siècle de Clisthène à Socrate. Paris: Seuil, 1995 ISBN 2-02-013128-5 - (French) Lévy, E. (2003). Sparte. Paris: Seuil, 2003 ISBN 2-02-032453-9 - (French) Mactoux, M.-M. (1980). Douleia: Esclavage et pratiques discursives dans l'Athènes classique. Paris: Belles Lettres, 1980. ISBN 2-251-60250-X - (French) Mactoux, M.-M. (1981). "L'esclavage comme métaphore : douleo chez les orateurs attiques", Proceedings of the 1980 GIREA Workshop on Slavery, Kazimierz, 3–8 November 1980, Index, 10, 1981, pp. 20–42. - (French) Masson, O. "Les noms des esclaves dans la Grèce antique", Proceedings of the 1971 GIREA Workshop on Slavery, Besançon, 10–11 mai 1971. Paris: Belles Lettres, 1973, pp. 9–23. - (French) Mele, A. "Esclavage et liberté dans la société mycénienne", Proceedings of the 1973 GIREA Workshop on Slavery, Besançon 2–3 mai 1973. Paris: Les Belles Lettres, 1976. - Morrow, G.R. "The Murder of Slaves in Attic Law", Classical Philology, Vol. 32, No. 3 (Jul., 1937), pp. 210–227. - Oliva, P. Sparta and her Social Problems. Prague: Academia, 1971. - (French) Plassart, A. "Les Archers d'Athènes," Revue des études grecques, XXVI (1913), pp. 151–213. - Pomeroy, S.B. Goddesses, Whores, Wives and Slaves. New York: Schoken, 1995. 
ISBN 0-8052-1030-X - Pritchett, W.K. and Pippin, A. (1956). "The Attic Stelai, Part II", Hesperia, Vol.25, No.3 (Jul.–Sep., 1956), pp. 178–328. - Pritchett (1961). "Five New Fragments of the Attic Stelai", Hesperia, Vol.30, No. 1 (Jan.–Mar., 1961), pp. 23–29. - Wood, E.M. (1983). "Agriculture and Slavery in Classical Athens", American Journal of Ancient History No.8 (1983), pp. 1–47. - Von Fritz, K. "The Meaning of ἙΚΤΗΜΟΡΟΣ", The American Journal of Philology, Vol.61, No.1 (1940), pp. 54–61. - Wood, E.M. (1988). Peasant-Citizen and Slave: The Foundations of Athenian Democracy. New York: Verso, 1988 ISBN 0-86091-911-0. - General studies - Bellen, H., Heinen H., Schäfer D., Deissler J., Bibliographie zur antiken Sklaverei. I: Bibliographie. II: Abkurzungsverzeichnis und Register, 2 vol. Stuttgart: Steiner, 2003. ISBN 3-515-08206-9 - Bieżuńska-Małowist I. La Schiavitù nel mondo antico. Naples: Edizioni Scientifiche Italiane, 1991. - Finley, M.: - Garnsey, P. Ideas of Slavery from Aristotle to Augustine. Cambridge: Cambridge University Press, 1996. ISBN 0-521-57433-1 - De Ste-Croix, G.E.M. The Class Struggle in the Ancient Greek World. London: Duckworth; Ithaca, N.Y.: Cornell University Press, 1981. ISBN 0-8014-1442-3 - Vidal-Naquet, P.: - "Women, Slaves and Artisans", third part of The Black Hunter : Forms of Thought and Forms of Society in the Greek World. Baltimore: Johns Hopkins University Press, 1988 (1st edn. 1981). ISBN 0-8018-5951-4 - with Vernant J.-P. Travail et esclavage en Grèce ancienne. Bruxelles: Complexe, "History" series, 2006 (1st edn. 1988). ISBN 2-87027-246-4 - Wiedemann, T. Greek and Roman Slavery. London: Routledge, 1989 (1st edn. 1981). ISBN 0-415-02972-4 - Westermann, W.L. The Slave Systems of Greek and Roman Antiquity. Philadelphia: The American Philosophical Society, 1955. - Specific studies - Brulé, P. (1978b). La Piraterie crétoise hellénistique, Belles Lettres, 1978. ISBN 2-251-60223-2 - Brulé, P. and Oulhen, J. (dir.). Esclavage, guerre, économie en Grèce ancienne. Hommages à Yvon Garlan. Rennes: Presses universitaires de Rennes, "History" series, 1997. ISBN 2-86847-289-3 - Ducrey, P. Le traitement des prisonniers de guerre en Grèce ancienne. Des origines à la conquête romaine. Paris: De Boccard, 1968. - Foucart, P. "Mémoire sur l'affranchissement des esclaves par forme de vente à une divinité d'après les inscriptions de Delphes", Archives des missions scientifiques et littéraires, 2nd series, vol.2 (1865), pp. 375–424. - Hunt, P. Slaves, Warfare, and Ideology in the Greek Historians. Cambridge: Cambridge University Press, 1998. ISBN 0-521-58429-9 - Ormerod, H.A. Piracy in the Ancient World. Liverpool: Liverpool University Press, 1924. - Gabrielsen, V. "La piraterie et le commerce des esclaves", in E. Erskine (ed.), Le Monde hellénistique. Espaces, sociétés, cultures. 323-31 av. J.-C.. Rennes: Presses Universitaires de Rennes, 2004, pp. 495–511. ISBN 2-86847-875-1 |Wikimedia Commons has media related to Slavery in Ancient Greece.| - (French) GIREA, The International Group for Research on Slavery in Antiquity (in French) - Greek law bibliographic database at Nomoi - Documents on Greek slavery on the Ancient History Sourcebook. - Manumission records of women at Delphi at attalus.org - (French) Index thématiques de l'esclavage et de la dépendance Subject index on slavery and related topics, by author. - (French) Bibliothèque numérique ISTA Free library
https://en.wikipedia.org/wiki/Slavery_in_Ancient_Greece
4.21875
Plaçage was a recognized extralegal system in French and Spanish slave colonies of North America (including the Caribbean) by which ethnic European men entered into the equivalent of common-law marriages with women of color, of African, Native American and mixed-race descent. The term comes from the French placer meaning "to place with". The women were not legally recognized as wives but were known as placées; their relationships were recognized among the free people of color as mariages de la main gauche or left-handed marriages. They became institutionalized with contracts or negotiations that settled property on the woman and her children, and in some cases gave them freedom if enslaved. The system flourished throughout the French and Spanish colonial periods, reaching its zenith during the latter, between 1769 and 1803. It was most practiced in New Orleans, where planter society had created enough wealth to support the system. It also took place in the Latin-influenced cities of Natchez and Biloxi, Mississippi; Mobile, Alabama; St. Augustine and Pensacola, Florida; as well as Saint-Domingue (now the Republic of Haiti). Plaçage became associated with New Orleans as part of its cosmopolitan society. History and development of the plaçage system The plaçage system developed from the predominance of men among early colonial populations, who took women as consorts from Native Americans and enslaved Africans. Later there developed a class of free people of color in Louisiana, and especially New Orleans, during the colonial years, from whom wealthy men would choose. In this period there was a shortage of European women, as the colonies were dominated in the early day by male explorers and colonists. Given the harsh conditions in Louisiana, persuading women to follow the men was not easy. France recruited willing farm- and city-dwelling women, known as casket or casquette girls, because they brought all their possessions to the colonies in a small trunk or casket. France also sent women convicted along with their debtor husbands, and in 1719, deported 209 women felons "who were of a character to be sent to the French settlement in Louisiana." (France also relocated young women orphans known as King's Daughters (French: filles du roi) to their colonies for marriage: to both Canada and Louisiana.) Historian Joan Martin maintains that there is little documentation that "casket girls", considered among the ancestors of white French Creoles, were brought to Louisiana. The Ursuline order of nuns supposedly chaperoned the casket girls until they married, but the order has denied they followed this practice. Martin suggests this was a myth, and that interracial relationships occurred from the beginning of the encounter among Europeans, Native Americans and Africans. She also writes that some Creole families who today identify as white had ancestors during the colonial period who were African or multiracial, and whose descendants married white over generations. Through warfare and raids, Native American women were often captured to be traded, sold, or taken as wives. At first, the colony generally imported male Africans to use as slave labor because of the heavy work of clearing to develop plantations. Over time, it also imported African female slaves. Marriage between the races was forbidden according to the Code Noir of the eighteenth century, but interracial sex continued. The upper class European men during this period often did not marry until their late twenties or early thirties. 
Premarital sex with an intended white bride, especially if she was of high rank, was not permitted socially. Free people of color White male colonists, often the younger sons of noblemen, military men, and planters, who needed to accumulate some wealth before they could marry, took women of color as consorts before marriage. Merchants and administrators also followed this practice if they were wealthy enough. A white man might rape a slave as young as twelve. When the women bore children, they were sometimes emancipated along with their children. Both the woman and her children might take the surnames of the man. When Creole men reached an age when they were expected to marry, some also kept their relationships with their placées, but this was less common. A wealthy white Creole man could have two (or more) families: one legal, and the other not. Their mixed-race children became the nucleus of the class of free people of color or gens de couleur libres in Louisiana and Saint-Domingue. After the Haitian Revolution in the late 18th and early 19th centuries, many refugees came to New Orleans, adding a new wave of French-speaking free people of color. During the period of French and Spanish rule, the gens de couleur came to constitute a third class in New Orleans and other former French cities - between the white Creoles and the mass of black slaves. They had certain status and rights, and often acquired education and property. Later their descendants became leaders in New Orleans, holding political office in the city and state, and becoming part of what developed as the African-American middle class in the United States. By 1788, 1500 Creole women of color and black women were being maintained by white men. Certain customs had evolved. It was common for a wealthy, married Creole to live primarily outside New Orleans on his plantation with his white family. He often kept a second address in the city to use for entertaining and socializing among the white elite. He had built or bought a house for his placée and their children. She and her children were part of the society of Creoles of color. The white world might not recognize the placée as a wife legally and socially, but she was recognized as such among the Creoles of color. Some of the women acquired slaves and plantations. Particularly during the Spanish colonial era, a woman might be listed as owning slaves; these were sometimes relatives who she intended to free after earning enough money to buy their freedom. While in New Orleans (or other cities), the man would cohabit with the placée as an official 'boarder' at her Creole cottage or house. Many were located near Rampart Street in New Orleans-—once the demarcation line or wall between the city and the frontier. Other popular neighborhoods for Creoles of color were the Faubourg Marigny and Tremé. If the man was not married, he might keep a separate residence, preferably next door or in the same or next block as his placée. He often took part in and arranged for the upbringing and education of their children. For a time both boys and girls were educated in France, as there were no schools in New Orleans for mixed-race children. As supporting such a plaçage arrangement(s) ran into thousands of dollars per year, it was limited to the wealthy. Inheritance and work Upon the death of her protector, the placée and her family could, on legal challenge, expect up to a third of the man's property. 
Some white lovers tried and succeeded in making their mixed-race children primary heirs over other white descendants or relatives. The women in these relationships often worked to develop assets: acquiring property, running a legitimate rooming-house, or operating a small business as a hairdresser, marchande (female street or country merchant/vendor), or seamstress. A woman could also become a placée to another white Creole. She sometimes taught her daughters to become placées, by education and informal schooling in dress, comportment, and ways to behave. A mother negotiated with a young man for the dowry or property settlement, sometimes by contract, for her daughter if a white Creole were interested in her. A former placée could also marry or cohabit with a Creole man of color and have more children. Contrary to popular misconceptions, placées were not and did not become prostitutes. Creole men of color objected to the practice as denigrating the virtue of Creole women of color, but some, as descendants of white males, benefited by the transfer of social capital. Martin writes, "They did not choose to live in concubinage; what they chose was to survive." In the late 19th and early 20th centuries, after Reconstruction and with the reassertion of white supremacy across the former Confederacy, the white Creole historians Charles Gayarré and Alcée Fortier wrote histories that did not address plaçage in much detail. They suggested that little race mixing had occurred during the colonial period, and that the placées had seduced or led white Creole men astray. They wrote that the French Creoles (in the sense of having long been native to Louisiana) were ethnic Europeans who were threatened by the spectre of race-mixing like other Southern whites. Gayarré, when younger, was said to have taken a woman of color as his placée, and she bore their children, to his later shame. He married a white woman late in life. His earlier experience inspired his novel, Fernando de Lemos.
Marie Thérèse Metoyer
Marie Thérèse Metoyer dite Coincoin became an icon of black female entrepreneurship in colonial Louisiana. She was born at the frontier outpost of Natchitoches on Cane River in August 1742 as a slave of the post founder, the controversial explorer Louis Juchereau de St. Denis. She would be, for twenty years, the placée of a French colonial merchant-turned-planter, Claude Thomas Pierre Métoyer, two years her junior. At the onset of their plaçage, she was already the mother of five children; she would bear ten more to Métoyer. In 1778, he freed her after the parish priest filed charges against Coincoin as a "public concubine" and threatened to have her sold at New Orleans if they did not end their relationship. As a free woman, she remained with Métoyer until 1788, when his growing fortune persuaded him to take a wife who could provide legal heirs. (He chose another Marie Thérèse, a white Créole of French and German birth.) In setting Coincoin aside, Métoyer donated to her his interest in 80 arpents, about 68 acres (280,000 m2) of unpatented land, adjacent to his plantation, to help support their free-born offspring. On that modest tract, Coincoin planted tobacco, a valuable commodity in the struggling colony. She and her children trapped bears and wild turkeys for sales of meat, hide, and oil locally and at the New Orleans market. She also manufactured medicine, a skill shared by her freed-slave sister Marie Louise dite Mariotte and likely one acquired from their African-born parents.
With this money, she progressively bought the freedom of four of her first five children and several grandchildren, before investing in three African-born slaves to provide the physical labor that became more difficult as she aged. After securing a colonial patent on her homestead in 1794, she petitioned for and was given a land concession from the Spanish crown. On that piney-woods tract of 800 arpents (667 ac) on Old Red River, about 5 mi from her farmstead, she set up a vacherie (a ranch) and engaged a Spaniard to tend her cattle. Shortly before her death in 1816, Coincoin sold her homestead and divided her remaining property (her piney-woods land, the three African slaves, and their offspring) among her own progeny. As often happened among the children of plaçages, Coincoin's one surviving daughter by Métoyer, Marie Susanne, became a placée also. As a young woman, apparently with the blessing of both parents, she entered into a relationship with a newly arrived physician, Joseph Conant from New Orleans. When he left Cane River, soon after the birth of their son, she formed a second and lifelong plaçage with a Cane River planter, Jean Baptiste Anty. As a second-generation entrepreneur, Susanne became far more successful than her mother and died in 1838, leaving an estate of $61,600 (equivalent to $1,500,000 in 2009 currency). Modern archaeological work at the site of Coincoin's farmstead is documenting some aspects of her domestic life. A mid-nineteenth-century dwelling, now dubbed the Coincoin-Prudhomme House, although it was not the actual site of her residence, commemorates her within the Cane River National Heritage Area. Popular lore has also erroneously credited her with the ownership of a Cane River plantation founded by her son Louis Metoyer, known today as Melrose Plantation, and its historic buildings Yucca House and African House. Her eldest half-French son, Nicolas Augustin Métoyer, founded St. Augustine Parish (Isle Brevelle) Church, the spiritual center of Cane River's large community of Creoles of color who trace their heritage to Coincoin.
Eulalie de Mandéville
There were many other examples of white Creole fathers who reared and carefully and quietly placed their daughters of color with the sons of known friends or family members. This occurred with Eulalie de Mandéville, the elder half-sister of color to the eccentric nobleman, politician, and land developer Bernard Xavier de Marigny de Mandéville. Taken from her slave mother as a baby, and partly raised by a white grandmother, 22-year-old Eulalie was "placed" by her father, Count Pierre Enguerrand Philippe, Écuyer de Mandéville, Sieur de Marigny, with Eugène de Macarty, a member of the famous French-Irish clan, in 1796. Their alliance resulted in five children and lasted almost fifty years. Macarty, like some white Creoles, never married a white woman. (In contrast to the Macartys' stable relationship, Eugène's brother Augustin de Macarty was married and was said to have had numerous, complex affairs with Creole women of color. When he died, several women made claims on behalf of their children against his estate.) On his deathbed in 1845, Eugène de Macarty married Eulalie. He willed her all of his money and property, then worth $12,000. His white relatives, including his niece, Marie Delphine de Macarty LaLaurie, contested the will. The court upheld his will. After Eulalie's death, their surviving children defeated another attempt by Macarty's relatives to claim his estate, by then worth more than $150,000.
Eulalie de Mandéville de Macarty became a successful marchande and ran a dairy. She died in 1848. Rosette Rochon was born in 1767 in colonial Mobile, the daughter of Pierre Rochon, a shipbuilder from a Québécois family (family name was Rocheron in Québec), and his mulâtresse slave-consort Marianne, who bore him five other children. Once Rosette reached a suitable age, she became the consort of a Monsieur Hardy, with whom she relocated to the colony of Saint Domingue. During her sojourn there, Hardy must have died or relinquished his relationship with her; for in 1797 during the Haitian Revolution, she escaped to New Orleans, where she later became the placée of Joseph Forstal and Charles Populus, both wealthy white New Orleans Creoles. Rochon came to speculate in real estate in the French Quarter; she eventually owned rental property, opened grocery stores, made loans, bought and sold mortgages, and owned and rented out (hired out) slaves. She also traveled extensively back and forth to Haiti, where her son by Hardy had become a government official in the new republic. Her social circle in New Orleans once included Marie Laveau, Jean Lafitte, and the free black contractors and real estate developers Jean-Louis Doliolle and his brother Joseph Doliolle. In particular, Rochon became one of the earliest investors in the Faubourg Marigny, acquiring her first lot from Bernard de Marigny in 1806. Bernard de Marigny, the Creole speculator, refused to sell the lots he was subdividing from his family plantation to anyone who spoke English. While this turned out to be a losing financial decision, Marigny felt more comfortable with the French-speaking, Catholic free people of color (having relatives, lovers, and even children on this side of the color line). Consequently, much of Faubourg Marigny was built by free black artisans for free people of color or for French-speaking white Creoles. Rochon remained largely illiterate, dying in 1863 at the age of 96, leaving behind an estate valued at $100,000 (today, an estate worth a million dollars). Marie Laveau (also spelled Leveau, Laveaux), known as the voodoo queen of New Orleans, was born between 1795 and 1801 as the daughter of a white Haitian plantation owner, Charles Leveaux, and his mixed black and Indian placée Marguerite Darcantel (or D'Arcantel). Because there were so many whites as well as free people of color in Haiti with the same names, Leveaux could also have been a free man of color who owned slaves and property as well. All three may have escaped Haiti along with thousands of other Creole whites and Creoles of color during the slave uprisings that culminated in the French colony's becoming the only independent black republic in the New World. At 17, Marie married a Creole man of color popularly known as Jacques Paris (however, in some documents, he is known as Santiago Paris). Paris either died, disappeared or deliberately abandoned her (some accounts also relate that he was a merchant seaman or sailor in the navy) after she produced a daughter. Laveau was styling herself as the Widow Paris and was a hairdresser for white matrons (she was also reckoned to be an herbalist and yellow fever nurse) when she met Louis-Christophe Dumesnil de Glapion and in the early 1820s, they became lovers. 
Marie was just beginning her spectacular career as a voodoo practitioner (she would not be declared a 'queen' until about 1830), and Dumesnil de Glapion was a fiftyish white Creole veteran of the Battle of New Orleans with relatives on both sides of the color line. Recently, it has been alleged that Dumesnil de Glapion was so in love with Marie that he refused to live separately from his placée according to racial custom. In an unusual decision, Dumesnil de Glapion passed as a man of color in order to live with her under respectable circumstances—thus explaining the confusion many historians have had over whether he was truly white or black. Although it is popularly thought that Marie presented Dumesnil de Glapion with fifteen children, only five are listed in vital statistics and of these, two daughters—one the famous Marie Euchariste or Marie Leveau II—lived to adulthood. Marie Euchariste closely resembled her mother and startled many who thought that Marie Leveau had been resurrected by the black arts, or could be in two places at once, beliefs that the daughter did little to correct.
Sebastopol
This plantation house and property was built and cultivated by Don Pedro Morin in the 1830s in St. Bernard Parish, Louisiana. It was bought twenty years later by Colonel Ignatius Szymanski, a Polish American who later served in the Confederate Army, and renamed Sebastopol. At his death, Colonel Szymanski willed this estate to his placée Eliza Romain, a free woman of color, and to their son John Szymanski. The term quadroon is a fractional one referring to a person with one white and one mulatto parent, whom some courts would have considered one-fourth Black. The quadroon balls were social events designed to encourage mixed-race women to form liaisons with wealthy white men through a system of concubinage known as plaçage (Guillory 68-9). Monique Guillory writes about quadroon balls that took place in New Orleans, the city most strongly associated with these events. She approaches the balls in the context of the history of a building whose structure is now the Bourbon Orleans Hotel. Inside is the Orleans Ballroom, a legendary, if not entirely factual, location for the earliest quadroon balls. In 1805, a man named Albert Tessier began renting a dance hall where he threw twice-weekly dances for free quadroon women and white men only (80). These dances were elegant and elaborate, designed to appeal to wealthy white men. Although race mixing was prohibited by New Orleans law, it was common for white gentlemen to attend the balls, sometimes stealing away from white balls to mingle with the city's quadroon female population. The principal desire of quadroon women attending these balls was to become a placée as the mistress of a wealthy gentleman, usually a young white Creole or a visiting European (81). These arrangements were a common occurrence, Guillory suggests, because the highly educated, socially refined quadroons were prohibited from marrying white men and were unlikely to find Black men of their own status. A quadroon's mother usually negotiated with an admirer the compensation that would be received for having the woman as his mistress. Typical terms included some financial payment to the parent, financial and/or housing arrangements for the quadroon herself, and, many times, paternal recognition of any children the union produced. Guillory points out that some of these matches were as enduring and exclusive as marriages.
A beloved quadroon mistress had the power to destabilize white marriages and families, something for which she was much resented. According to Guillory, the system of plaçage had a basis in the economics of mixed race. The plaçage of black women with white lovers, Guillory writes, could take place only because of the socially determined value of their light skin, the same light skin that commanded a higher price on the slave block, where light-skinned girls fetched much higher prices than did prime field hands (82). Guillory posits the quadroon balls as the best among severely limited options for these near-white women, a way for them to control their sexuality and decide the price of their own bodies. She contends, "The most a mulatto mother and a quadroon daughter could hope to attain in the rigid confines of the black/white world was some semblance of economic independence and social distinction from the slaves and other blacks" (83). She notes that many participants in the balls were successful in actual businesses when they could no longer rely on an income from the plaçage system. She speculates they developed business acumen from the process of marketing their own bodies.
Treatment in fiction
- Isabel Allende, Island Beneath the Sea. A novel about a mixed-race slave who is brought to Saint-Domingue and is eventually taken to New Orleans with her master's family. Her quadroon daughter is introduced to society as a placée.
- George Washington Cable, The Grandissimes: A Story of Creole Life (1880). He also wrote the short stories "Títe Poulette", "Madame John's Legacy" and "Madame Delphine," which portrayed the placée as a societal outcast.
- William Faulkner, Absalom, Absalom! A young man is engaged to a woman until it is found out that he is already involved with a placée in New Orleans and has a child with her.
- Edna Ferber, Saratoga Trunk. The book was later adapted as a film of the same name, starring Ingrid Bergman and Gary Cooper. But it, like the film, falls apart after the action and the heroine move on to Saratoga Springs, New York.
- Barbara Hambly, The Benjamin January Mysteries. This series of novels features Benjamin January, a free man of color, in New Orleans in the 1830s. His mother and half-sister are also featured; both are placées. His wife is the daughter of a placée.
- Anne Rice, The Feast of All Saints. A coming-of-age novel about a young man making his way in Creole New Orleans. It was adapted as a film of the same name.
- Patricia Vaughn, Shadows on the Bayou. A historical romance following the life of Sylvia Dupont, a young woman raised to be a placée. Dupont marries a free man of color and struggles with the consequences.
- Marcus Gardley, The House That Will Not Stand, which premiered at Berkeley Repertory Theatre, January 31 - March 16, 2014.
- Beyoncé Knowles, in her "Formation" music video, features visuals of placées. The song was released February 6, 2016.
- Chained to the Rock of Adversity: To Be Free, Black & Female in the Old South, edited by Virginia Meacham Gould, University of Georgia Press, 1998.
- Katy F. Morlas, "La Madame et la Mademoiselle," graduate thesis in history, Louisiana State University and Agricultural and Mechanical College, 2003.
- Joan M. Martin, Placage and the Louisiana Gens de Couleur Libre, in Creole, edited by Sybil Kein, Louisiana State University Press, Baton Rouge, 2000.
- Monique Guillory, "Under One Roof: The Sins and Sanctity of the New Orleans Quadroon Balls," in Race Consciousness, edited by Judith Jackson Fossett and Jeffrey A. Tucker, New York University Press, 1997.
- Mills, Elizabeth Shown. "Marie Thérèse Coincoin (1742–1816): Slave, Slave Owner, and Paradox," chap. 1 in Janet Allred and Judy Gentry, eds., Louisiana Women: Their Lives and Times (Athens, Ga.: University of Georgia Press, 2009), pp. 10–29.
- Mills, Gary B. The Forgotten People: Cane River's Creoles of Color. Baton Rouge: Louisiana State University Press, 1977.
- Mills, Elizabeth Shown. "Which Marie Louise is 'Mariotte'? Sorting Slaves with Common Names." National Genealogical Society Quarterly 94 (September 2006): 183–204; archived online at Historic Pathways.
- Morlas, ibid.
- Violet Harrington Bryan, "Marcus Christian's Treatment of Les Gens de Couleur Libre," in Creole, edited by Sybil Kein, Louisiana State University Press, Baton Rouge, 2000.
- Caryn Cosse Bell, "The Real Marie Laveau," review of Voodoo Queen: The Spirited Lives of Marie Laveau, by Martha Ward, University Press of Mississippi, Jackson, 2004.
Recent books
- The Free People of Color of New Orleans, An Introduction, by Mary Gehman and Lloyd Dennis, Margaret Media, Inc., 1994.
- Africans in Colonial Louisiana: The Development of Afro-Creole Culture in the Eighteenth Century, by Gwendolyn Midlo Hall, Louisiana State University Press, 1995.
- Creole New Orleans, Race and Americanization, by Arnold R. Hirsch and Joseph Logsdon, Louisiana State University Press, 1992.
- Bounded Lives, Bounded Places: Free Black Society in Colonial New Orleans, by Kimberly S. Hanger.
- Afristocracy: Free Women of Color and the Politics of Race, Class, and Culture, by Angela Johnson-Fisher, Verlag, 2008.
Contemporary accounts
- Travels by His Highness Duke Bernhard of Saxe-Weimar-Eisenach through North America in the years 1825 and 1826, by Bernhard, Duke of Saxe-Weimar-Eisenach; William Jeronimus and C.J. Jeronimus, University Press of America, 2001. (The Duke relates his visits to quadroon balls as a tourist in New Orleans.)
- Voyage to Louisiana (an abridged translation from the original French by Stuart O. Landry), by C.C. Robin, Pelican Publishing Co., 1966. (Robin visited Louisiana just after its purchase by the Americans and resided there for two years.)
- Mon Cher, Creole genealogical newsletter, dated June 20, 2003, on the genealogy of Marie Laveau, also related to the Trudeaus, page 5.
- Information about the life of Marie Thérèse Coincoin Metoyer.
- History of 918 Barracks Street in the French Quarter, where Eugène Macarty purchased and then built another home for his placée, Eulalie Mandeville (fwc; for free woman of color) and their children.
- Website of Louisiana Creoles of color.
- Website of the Musée Rosette Rochon, located on 1515 Pauger Street, Marigny, New Orleans. This house, which survived Hurricane Katrina, is the only extant residence built by Mme. Rochon.
https://en.wikipedia.org/wiki/Pla%C3%A7age
4
Definitions for harlem renaissance
A period in the 1920s when African-American achievements in art and music and literature flourished.
The Harlem Renaissance was a cultural movement that spanned the 1920s. At the time, it was known as the "New Negro Movement", named after the 1925 anthology by Alain Locke. Though it was centered in the Harlem neighborhood of New York City, many French-speaking black writers from African and Caribbean colonies who lived in Paris were also influenced by the Harlem Renaissance. The Harlem Renaissance is generally considered to have spanned from about 1919 until the early or mid-1930s. Many of its ideas lived on much longer. The zenith of this "flowering of Negro literature", as James Weldon Johnson preferred to call the Harlem Renaissance, was placed between 1924 and 1929.
Sample Sentences & Example Usage
The influence of his work by Harlem Renaissance artists is evident.
"Aberjhani is also known as author of Encyclopedia of the Harlem Renaissance, The Bridge of Silver Wings, and The Wisdom of W.E.B. Dubois. He publishes often in various publications, print and online. His poetry has an intensely intimate courage, the sort we would all wish to have, but too often hold protectively back."
"We are drawn to the Harlem Renaissance because of the hope for black uplift and interracial interaction and empathy that it embodied and because there is a certain element of romanticism associated with the era's creativity, its seemingly larger-than-life heroes and heroines, and its most brilliantly lit terrain, Harlem, USA."
http://www.definitions.net/definition/harlem%20renaissance
4.03125
Advancing Basic Science for Humanity 08/15/2012 - Phoenix Cluster Sets Record Pace at Forming Stars (Originally published by NASA) August 15, 2012 Astronomers have found an extraordinary galaxy cluster -- one of the largest objects in the Universe -- that is breaking several important cosmic records. Observations of this cluster, known as the Phoenix Cluster, with NASA's Chandra X-ray Observatory, the NSF’s South Pole Telescope and eight other world-class observatories, may force astronomers to rethink how these colossal structures, and the galaxies that inhabit them, evolve. Stars are forming in the Phoenix Cluster at the highest rate ever observed for the middle of a galaxy cluster. The object is also the most powerful producer of X-rays of any known cluster, and among the most massive of clusters. The data also suggest that the rate of hot gas cooling in the central regions of the cluster is the largest ever observed. This galaxy cluster has been dubbed the "Phoenix Cluster" because it is located in the constellation of the Phoenix, and because of its remarkable properties. The cluster is located about 5.7 billion light years from Earth. "The mythology of the Phoenix -- a bird rising from the dead -- is a great way to describe this revived object," said Michael McDonald, a Hubble Fellow in the Kavli Institute for Astrophysics and Space Research at the Massachusetts Institute of Technology and the lead author of a paper appearing in the August 16th issue of the journal Nature. "While galaxies at the center of most clusters may have been dormant for billions of years, the central galaxy in this cluster seems to have come back to life with a new burst of star formation." Like other galaxy clusters, Phoenix contains a vast reservoir of hot gas -- containing more normal matter than all of the galaxies in the cluster combined -- that can only be detected with X-ray telescopes like Chandra. The prevailing wisdom had once been that this hot gas should cool over time and sink to the galaxy at the center of the cluster, forming huge numbers of stars. However, most galaxy clusters have formed very few stars over the last few billion years. Astronomers think that the supermassive black hole in the central galaxy of clusters pumps energy into the system, preventing cooling of gas from causing a burst of star formation. The famous Perseus Cluster is an example of a black hole bellowing out energy and preventing the gas from cooling to form stars at a high rate. Repeated outbursts from the black hole in the center of Perseus, in the form of powerful jets, created giant cavities and produced sound waves with an incredibly deep B-flat note 57 octaves below middle C. "We thought that these very deep sounds might be found in galaxy clusters everywhere," said co-author Ryan Foley, a Clay Fellow at the Harvard-Smithsonian Center for Astrophysics in Cambridge, Mass. "The Phoenix Cluster is showing us this is not the case - or at least there are times the music stops. Jets from the giant black hole at the center of a cluster are apparently not powerful enough to prevent the cluster gas from cooling.” With its black hole not producing powerful enough jets, the center of the Phoenix Cluster is buzzing with stars that are forming about 20 times faster than in the Perseus cluster. This rate is the highest seen in the center of a galaxy cluster but not the highest seen anywhere in the Universe. However, the overall record-holding galaxies, located outside clusters, have rates only about twice as high. 
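An aside on the Perseus "note" mentioned above: saying a sound sits 57 octaves below middle C is simply a statement of repeated halving, since each octave down halves the frequency. The short sketch below works through that arithmetic; the reference pitch (the B-flat just below middle C, taken here as roughly 233 Hz) and the unit conversions are assumptions for illustration, not figures from the article.

```python
# Illustrative arithmetic only: each octave down halves a frequency, so a note
# "57 octaves below" a reference pitch is lower by a factor of 2**57.
# Assumed reference pitch: the B-flat just below middle C, ~233.08 Hz.

B_FLAT_HZ = 233.08             # assumed reference frequency, in hertz
OCTAVES_BELOW = 57

freq_hz = B_FLAT_HZ / 2**OCTAVES_BELOW        # frequency of the cluster "note"
period_s = 1.0 / freq_hz                      # seconds per oscillation
period_yr = period_s / (365.25 * 24 * 3600)   # years per oscillation

print(f"frequency: {freq_hz:.2e} Hz")         # ~1.6e-15 Hz
print(f"period:    {period_yr:.2e} years")    # ~2e7 years per cycle
```

Shifting the assumed reference pitch or the octave count by one only changes the answer by about a factor of two, so the scale is robust: the sound waves the jets drive through the cluster gas take on the order of ten million years to complete a single oscillation.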
The frenetic pace of star birth and cooling of gas in Phoenix are causing both the galaxy and the black hole to add mass very quickly -- an important phase that the researchers predict will be relatively short-lived. "The galaxy and its black hole are undergoing unsustainable growth," said co-author Bradford Benson, a Kavli Fellow in the Kavli Institute for Cosmological Physics at the University of Chicago. "This growth spurt can't last longer than about a hundred million years, otherwise the galaxy and black hole would become much bigger than their counterparts in the nearby Universe." Remarkably, the Phoenix Cluster and its central galaxy and supermassive black hole are already among the most massive known objects of their type. Because of their tremendous size, galaxy clusters are crucial objects for studying cosmology and galaxy evolution and so finding one with such extreme properties like the Phoenix Cluster is important. "This spectacular star burst is a very significant discovery because it suggests we have to rethink how the massive galaxies in the centers of clusters grow," said Martin Rees of Cambridge University, who was not involved with the study. "The cooling of hot gas might be a much more important source of stars than previously thought." The Phoenix Cluster was originally detected by the National Science Foundation's South Pole Telescope, and later was observed in optical light by the Gemini Observatory in Chile as well as the Blanco 4-meter and Magellan telescopes, also in Chile. The hot gas and its rate of cooling were estimated from Chandra data. To measure the star formation rate in the Phoenix Cluster, several space-based telescopes were used including NASA's WISE and GALEX, and ESA's Herschel. NASA's Marshall Space Flight Center in Huntsville, Ala., manages the Chandra program for NASA's Science Mission Directorate in Washington. The Smithsonian Astrophysical Observatory controls Chandra's science and flight operations from Cambridge, Mass.
http://www.kavlifoundation.org/kavli-news/NASA-phoenix-cluster-sets-record-pace-forming-stars
4.125
Romanesque architecture is an architectural style of medieval Europe characterized by semi-circular arches. There is no consensus for the beginning date of the Romanesque style, with proposals ranging from the 6th to the late 10th century, this later date being the most commonly held. It developed in the 12th century into the Gothic style, marked by pointed arches. Examples of Romanesque architecture can be found across the continent, making it the first pan-European architectural style since Imperial Roman Architecture. The Romanesque style in England is traditionally referred to as Norman architecture. Combining features of ancient Roman and Byzantine buildings and other local traditions, Romanesque architecture is known by its massive quality, thick walls, round arches, sturdy pillars, groin vaults, large towers and decorative arcading. Each building has clearly defined forms, frequently of very regular, symmetrical plan; the overall appearance is one of simplicity when compared with the Gothic buildings that were to follow. The style can be identified right across Europe, despite regional characteristics and different materials. Many castles were built during this period, but they are greatly outnumbered by churches. The most significant are the great abbey churches, many of which are still standing, more or less complete and frequently in use. The enormous quantity of churches built in the Romanesque period was succeeded by the still busier period of Gothic architecture, which partly or entirely rebuilt most Romanesque churches in prosperous areas like England and Portugal. The largest groups of Romanesque survivors are in areas that were less prosperous in subsequent periods, including parts of southern France, northern Spain and rural Italy. Survivals of unfortified Romanesque secular houses and palaces, and the domestic quarters of monasteries are far rarer, but these used and adapted the features found in church buildings, on a domestic scale. - 1 Definition - 2 Scope - 3 History - 4 Characteristics - 5 Ecclesiastical architecture - 5.1 Plan - 5.2 Section - 5.3 Church and cathedral east ends - 5.4 Church and cathedral façades and external decoration - 5.5 Church towers - 5.6 Portals - 5.7 Interiors - 5.8 Other structures - 5.9 Decoration - 5.10 Transitional style and the continued use of Romanesque forms - 6 Romanesque castles, houses and other buildings - 7 Romanesque Revival - 8 Notes - 9 See also - 10 References - 11 Further reading - 12 External links According to the Oxford English Dictionary, the word "Romanesque" means "descended from Roman" and was first used in English to designate what are now called Romance languages (first cited 1715). The French term "romane" was first used in the architectural sense by archaeologist Charles de Gerville in a letter of 18 December 1818 to Auguste Le Prévost to describe what Gerville sees as a debased Roman architecture.[Notes 2] In 1824 Gerville's friend Arcisse de Caumont adopted the label "roman" to describe the "degraded" European architecture from the 5th to the 13th centuries, in his Essai sur l'architecture religieuse du moyen-âge, particulièrement en Normandie, at a time when the actual dates of many of the buildings so described had not been ascertained: The name Roman (esque) we give to this architecture, which should be universal as it is the same everywhere with slight local differences, also has the merit of indicating its origin and is not new since it is used already to describe the language of the same period. 
Romance language is degenerated Latin language. Romanesque architecture is debased Roman architecture. The first use in a published work is in William Gunn's An Inquiry into the Origin and Influence of Gothic Architecture (London 1819). The word was used by Gunn to describe the style that was identifiably Medieval and prefigured the Gothic, yet maintained the rounded Roman arch and thus appeared to be a continuation of the Roman tradition of building. The term is now used for the more restricted period from the late 10th to 12th centuries. The term "Pre-romanesque" is sometimes applied to architecture in Germany of the Carolingian and Ottonian periods and Visigothic, Mozarab and Asturian constructions between the 8th and the 10th centuries in the Iberian Peninsula while "First Romanesque" is applied to buildings in north of Italy and Spain and parts of France that have Romanesque features but pre-date the influence of the Abbey of Cluny. Portal, Church of Santa Maria, Viu de Llevata, Catalonia, Spain Cloister of the Basilica di San Giovanni in Laterano, Rome Bell tower of Angoulême Cathedral, Charente, SW France Window and Lombard band of the Rotunda of San Tomè, Almenno San Bartolomeo Buildings of every type were constructed in the Romanesque style, with evidence remaining of simple domestic buildings, elegant town houses, grand palaces, commercial premises, civic buildings, castles, city walls, bridges, village churches, abbey churches, abbey complexes and large cathedrals. Of these types of buildings, domestic and commercial buildings are the most rare, with only a handful of survivors in the United Kingdom, several clusters in France, isolated buildings across Europe and by far the largest number, often unidentified and altered over the centuries, in Italy. Many castles exist, the foundations of which date from the Romanesque period. Most have been substantially altered, and many are in ruins. By far the greatest number of surviving Romanesque buildings are churches. These range from tiny chapels to large cathedrals, and although many have been extended and altered in different styles, a large number remain either substantially intact or sympathetically restored, demonstrating the form, character and decoration of Romanesque church architecture. Saint Nicholas Rotunda in Cieszyn, Poland The Civic Hall in Massa Marittima, Italy Abbey Church of St James, Lebeny, Hungary (1208) The keep of Conisbrough Castle, England. Romanesque architecture was the first distinctive style to spread across Europe since the Roman Empire. With the decline of Rome, Roman building methods survived to an extent in Western Europe, where successive Merovingian, Carolingian and Ottonian architects continued to build large stone buildings such as monastery churches and palaces. In the more northern countries Roman building styles and techniques had never been adopted except for official buildings, while in Scandinavia they were unknown. Although the round arch continued in use, the engineering skills required to vault large spaces and build large domes were lost. There was a loss of stylistic continuity, particularly apparent in the decline of the formal vocabulary of the Classical Orders. In Rome several great Constantinian basilicas continued in use as an inspiration to later builders. 
Some traditions of Roman architecture also survived in Byzantine architecture with the 6th-century octagonal Byzantine Basilica of San Vitale in Ravenna being the inspiration for the greatest building of the Dark Ages in Europe, the Emperor Charlemagne's Palatine Chapel, Aachen, Germany, built around the year AD 800. Dating shortly after the Palatine Chapel is a remarkable 9th-century Swiss manuscript known as the Plan of Saint Gall and showing a very detailed plan of a monastic complex, with all its various monastic buildings and their functions labelled. The largest building is the church, the plan of which is distinctly Germanic, having an apse at both ends, an arrangement not generally seen elsewhere. Another feature of the church is its regular proportion, the square plan of the crossing tower providing a module for the rest of the plan. These features can both be seen at the Proto-Romanesque St. Michael's Church, Hildesheim, 1001–1030. Architecture of a Romanesque style also developed simultaneously in the north of Italy, parts of France and in the Iberian Peninsula in the 10th century and prior to the later influence of the Abbey of Cluny. The style, sometimes called First Romanesque or Lombard Romanesque, is characterised by thick walls, lack of sculpture and the presence of rhythmic ornamental arches known as a Lombard band. Santa Maria in Cosmedin, Rome (8th – early 12th century) has a basilical plan and reuses ancient Roman columns. St. Michael's Church, Hildesheim has similar characteristics to the church in the Plan of Saint Gall. Charlemagne was crowned by the Pope in Old St. Peter's Basilica on Christmas Day in the year 800, with an aim to re-establishing the old Roman Empire. Charlemagne's political successors continued to rule much of Europe, with a gradual emergence of the separate political states that were eventually to become welded into nations, either by allegiance or defeat, the Kingdom of Germany giving rise to the Holy Roman Empire. The invasion of England by William, Duke of Normandy, in 1066, saw the building of both castles and churches that reinforced the Norman presence. Several significant churches that were built at this time were founded by rulers as seats of temporal and religious power, or places of coronation and burial. These include the Abbaye-Saint-Denis, Speyer Cathedral and Westminster Abbey (where little of the Norman church now remains). At a time when the remaining architectural structures of the Roman Empire were falling into decay and much of its learning and technology lost, the building of masonry domes and the carving of decorative architectural details continued unabated, though greatly evolved in style since the fall of Rome, in the enduring Byzantine Empire. The domed churches of Constantinople and Eastern Europe were to greatly affect the architecture of certain towns, particularly through trade and through the Crusades. The most notable single building that demonstrates this is St Mark's Basilica, Venice, but there are many lesser-known examples, particularly in France, such as the church of Saint-Front, Périgueux and Angoulême Cathedral. Much of Europe was affected by feudalism in which peasants held tenure from local rulers over the land that they farmed in exchange for military service. The result of this was that they could be called upon, not only for local and regional spats, but to follow their lord to travel across Europe to the Crusades, if they were required to do so. 
The Crusades, 1095–1270, brought about a very large movement of people and, with them, ideas and trade skills, particularly those involved in the building of fortifications and the metal working needed for the provision of arms, which was also applied to the fitting and decoration of buildings. The continual movement of people, rulers, nobles, bishops, abbots, craftsmen and peasants, was an important factor in creating a homogeneity in building methods and a recognizable Romanesque style, despite regional differences. Life became generally less secure after the Carolingian period. This resulted in the building of castles at strategic points, many of them being constructed as strongholds of the Normans, descendants of the Vikings who invaded northern France under Rollo in 911. Political struggles also resulted in the fortification of many towns, or the rebuilding and strengthening of walls that remained from the Roman period. One of the most notable surviving fortifications is that of the city of Carcassonne. The enclosure of towns brought about a lack of living space within the walls, and resulted in a style of town house that was tall and narrow, often surrounding communal courtyards, as at San Gimignano in Tuscany. In Germany, the Holy Roman Emperors built a number of residences, fortified, but essentially palaces rather than castles, at strategic points and on trade routes. The Imperial Palace of Goslar (heavily restored in the 19th century) was built in the early 11th century by Otto III and Henry III, while the ruined Palace at Gelnhausen was received by Frederick Barbarossa prior to 1170. The movement of people and armies also brought about the building of bridges, some of which have survived, including the 12th-century bridge at Besalú, Catalonia, the 11th-century Puente de la Reina, Navarre and the Pont-Saint-Bénézet, Avignon. Across Europe, the late 11th and 12th centuries saw an unprecedented growth in the number of churches. A great number of these buildings, both large and small, remain, some almost intact and in others altered almost beyond recognition in later centuries. They include many very well known churches such as Santa Maria in Cosmedin in Rome, the Baptistery in Florence and San Zeno Maggiore in Verona. In France, the famous abbeys of Aux Dames and Les Hommes at Caen and Mont Saint-Michel date from this period, as well as the abbeys of the pilgrimage route to Santiago de Compostela. Many cathedrals owe their foundation to this date, with others beginning as abbey churches, and later becoming cathedrals. In England, of the cathedrals of ancient foundation, all were begun in this period with the exception of Salisbury, where the monks relocated from the Norman church at Old Sarum, and several, such as Canterbury, which were rebuilt on the site of Saxon churches. In Spain, the most famous church of the period is Santiago de Compostela. In Germany, the Rhine and its tributaries were the location of many Romanesque abbeys, notably Mainz, Worms, Speyer and Bamberg. In Cologne, then the largest city north of the Alps, a very important group of large city churches survives largely intact. As monasticism spread across Europe, Romanesque churches sprang up in Scotland, Scandinavia, Poland, Hungary, Sicily, Serbia and Tunisia. Several important Romanesque churches were built in the Crusader kingdoms. 
The system of monasticism in which the religious become members of an order, with common ties and a common rule, living in a mutually dependent community, rather than as a group of hermits living in proximity but essentially separate, was established by the monk Benedict in the 6th century. The Benedictine monasteries spread from Italy throughout Europe, being always by far the most numerous in England. They were followed by the Cluniac order, the Cistercians, Carthusians and Augustinian Canons. During the Crusades, the military orders of the Knights Hospitaller and the Knights Templar were founded. The monasteries, which sometimes also functioned as cathedrals, and the cathedrals that had bodies of secular clergy often living in community, were a major source of power in Europe. Bishops and the abbots of important monasteries lived and functioned like princes. The monasteries were the major seats of learning of all sorts. Benedict had ordered that all the arts were to be taught and practiced in the monasteries. Within the monasteries books were transcribed by hand, and few people outside the monasteries could read or write. In France, Burgundy was the centre of monasticism. The enormous and powerful monastery at Cluny was to have lasting effect on the layout of other monasteries and the design of their churches. Unfortunately, very little of the abbey church at Cluny remains; the "Cluny II" rebuilding of 963 onwards has completely vanished, but we have a good idea of the design of "Cluny III" from 1088 to 1130, which until the Renaissance remained the largest building in Europe. However, the church of St. Sernin at Toulouse, 1080–1120, has remained intact and demonstrates the regularity of Romanesque design with its modular form, its massive appearance and the repetition of the simple arched window motif. Many parish churches across Europe, such as this in Vestre Slidre, Norway, are of Romanesque foundation Many cathedrals such as Trier Cathedral, Germany, date from this period, with many later additions. Pilgrimage and Crusade One of the effects of the Crusades, which were intended to wrest the Holy Places of Palestine from Islamic control, was to excite a great deal of religious fervour, which in turn inspired great building programs. The Nobility of Europe, upon safe return, thanked God by the building of a new church or the enhancement of an old one. Likewise, those who did not return from the Crusades could be suitably commemorated by their family in a work of stone and mortar. The Crusades resulted in the transfer of, among other things, a great number of Holy Relics of saints and apostles. Many churches, like Saint-Front, Périgueux, had their own home grown saint while others, most notably Santiago de Compostela, claimed the remains and the patronage of a powerful saint, in this case one of the Twelve Apostles. Santiago de Compostela, located near Galicia (present day Spain) became one of the most important pilgrimage destinations in Europe. Most of the pilgrims travelled the Way of St. James on foot, many of them barefooted as a sign of penance. They moved along one of the four main routes that passed through France, congregating for the journey at Jumièges, Paris, Vézelay, Cluny, Arles and St. Gall in Switzerland. They crossed two passes in the Pyrenees and converged into a single stream to traverse north-western Spain. Along the route they were urged on by those pilgrims returning from the journey. 
On each of the routes abbeys such as those at Moissac, Toulouse, Roncesvalles, Conques, Limoges and Burgos catered for the flow of people and grew wealthy from the passing trade. Saint-Benoît-du-Sault, in the Berry province, is typical of the churches that were founded on the pilgrim route. The general impression given by Romanesque architecture, in both ecclesiastical and secular buildings, is one of massive solidity and strength. In contrast with both the preceding Roman and later Gothic architecture, in which the load-bearing structural members are, or appear to be, columns, pilasters and arches, Romanesque architecture, in common with Byzantine architecture, relies upon its walls, or sections of walls called piers. Romanesque architecture is often divided into two periods known as the "First Romanesque" style and the "Romanesque" style. The difference is chiefly a matter of the expertise with which the buildings were constructed. The First Romanesque employed rubble walls, smaller windows and unvaulted roofs. A greater refinement marks the Second Romanesque, along with increased use of the vault and dressed stone. The walls of Romanesque buildings are often of massive thickness with few and comparatively small openings. They are often double shells, filled with rubble. The building material differs greatly across Europe, depending upon the local stone and building traditions. In Italy, Poland, much of Germany and parts of the Netherlands, brick is generally used. Other areas saw extensive use of limestone, granite and flint. The building stone was often used in comparatively small and irregular pieces, bedded in thick mortar. Smooth ashlar masonry was not a distinguishing feature of the style, particularly in the earlier part of the period, but occurred chiefly where easily worked limestone was available. Because of the massive nature of Romanesque walls, buttresses are not a highly significant feature, as they are in Gothic architecture. Romanesque buttresses are generally of flat square profile and do not project a great deal beyond the wall. In the case of aisled churches, barrel vaults, or half-barrel vaults over the aisles helped to buttress the nave, if it was vaulted. In the cases where half-barrel vaults were used, they effectively became like flying buttresses. Often aisles extended through two storeys, rather than the one usual in Gothic architecture, so as to better support the weight of a vaulted nave. In the case of Durham Cathedral, flying buttresses have been employed, but are hidden inside the triforium gallery. Castle Rising, England, shows flat buttresses and reinforcing at the corners of the building typical in both castles and churches. Abbaye Cerisy le Foret, Normandy, France, has a compact appearance with aisles rising through two storeys buttressing the vault. St Albans Cathedral England, demonstrates the typical alterations made to the fabric of many Romanesque buildings in different styles and materials Arches and openings The arches used in Romanesque architecture are nearly always semicircular, for openings such as doors and windows, for vaults and for arcades. Wide doorways are usually surmounted by a semi-circular arch, except where a door with a lintel is set into a large arched recess and surmounted by a semi-circular "lunette" with decorative carving. These doors sometimes have a carved central jamb. Narrow doors and small windows might be surmounted by a solid stone lintel. Larger openings are nearly always arched. 
A characteristic feature of Romanesque architecture, both ecclesiastic and domestic, is the pairing of two arched windows or arcade openings, separated by a pillar or colonette and often set within a larger arch. Ocular windows are common in Italy, particularly in the facade gable and are also seen in Germany. Later Romanesque churches may have wheel windows or rose windows with plate tracery. There are a very small number of buildings in the Romanesque style, such as Autun Cathedral in France and Monreale Cathedral in Sicily in which pointed arches have been used extensively, apparently for stylistic reasons. It is believed that in these cases there is a direct imitation of Islamic architecture. At other late Romanesque churches such as Durham Cathedral, and Cefalù Cathedral, the pointed arch was introduced as a structural device in ribbed vaulting. Its increasing application was fundamental to the development of Gothic architecture. An arcade is a row of arches, supported on piers or columns. They occur in the interior of large churches, separating the nave from the aisles, and in large secular interiors spaces, such as the great hall of a castle, supporting the timbers of a roof or upper floor. Arcades also occur in cloisters and atriums, enclosing an open space. Arcades can occur in storeys or stages. While the arcade of a cloister is typically of a single stage, the arcade that divides the nave and aisles in a church is typically of two stages, with a third stage of window openings known as the clerestory rising above them. Arcading on a large scale generally fulfils a structural purpose, but it is also used, generally on a smaller scale, as a decorative feature, both internally and externally where it is frequently "blind arcading" with only a wall or a narrow passage behind it. The facade of Notre Dame du Puy, le Puy en Velay, France, has a more complex arrangement of diversified arches: Doors of varying widths, blind arcading, windows and open arcades. Collegiate Church of Saint Gertrude, Nivelles, Belgium uses fine shafts of Belgian marble to define alternating blind openings and windows. Upper windows are similarly separated into two openings by colonettes. Worms Cathedral, Germany, displays a great variety of openings and arcades including wheel and rose windows, many small simple windows, galleries and Lombard courses. The south portal of the Abbey of Saint-Pierre, Moissac, France, has a square door divided by an ornate doorpost, surmounted by a carved tympanum and set within a vast arched porch. In Romanesque architecture, piers were often employed to support arches. They were built of masonry and square or rectangular in section, generally having a horizontal moulding representing a capital at the springing of the arch. Sometimes piers have vertical shafts attached to them, and may also have horizontal mouldings at the level of the base. Although basically rectangular, piers can often be of highly complex form, with half-segments of large hollow-core columns on the inner surface supporting the arch, or a clustered group of smaller shafts leading into the mouldings of the arch. Piers that occur at the intersection of two large arches, such as those under the crossing of the nave and transept, are commonly cruciform in shape, each arch having its own supporting rectangular pier at right angles to the other. Columns are an important structural feature of Romanesque architecture. Colonnettes and attached shafts are also used structurally and for decoration. 
Monolithic columns cut from a single piece of stone were frequently used in Italy, as they had been in Roman and Early Christian architecture. They were also used, particularly in Germany, when they alternated between more massive piers. Arcades of columns cut from single pieces are also common in structures that do not bear massive weights of masonry, such as cloisters, where they are sometimes paired. In Italy, during this period, a great number of antique Roman columns were salvaged and reused in the interiors and on the porticos of churches. The most durable of these columns are of marble and have the stone horizontally bedded. The majority are vertically bedded and are sometimes of a variety of colours. They may have retained their original Roman capitals, generally of the Corinthian or Roman Composite style. Some buildings, like Santa Maria in Cosmedin (illustrated above) and the atrium at San Clemente in Rome, may have an odd assortment of columns in which large capitals are placed on short columns and small capitals are placed on taller columns to even the height. Architectural compromises of this type are seen where materials have been salvaged from a number of buildings. Salvaged columns were also used to a lesser extent in France. In most parts of Europe, Romanesque columns were massive, as they supported thick upper walls with small windows, and sometimes heavy vaults. The most common method of construction was to build them out of stone cylinders called drums, as in the crypt at Speyer Cathedral. Hollow core columns Where really massive columns were called for, such as those at Durham Cathedral, they were constructed of ashlar masonry and the hollow core was filled with rubble. These huge untapered columns are sometimes ornamented with incised decorations. A common characteristic of Romanesque buildings, occurring both in churches and in the arcades that separate large interior spaces of castles, is the alternation of piers and columns. The most simple form that this takes is to have a column between each adjoining pier. Sometimes the columns are in multiples of two or three. At St. Michael's, Hildesheim, an A B B A alternation occurs in the nave while an A B A alternation can be seen in the transepts. At Jumièges there are tall drum columns between piers each of which has a half-column supporting the arch. There are many variations on this theme, most notably at Durham Cathedral where the mouldings and shafts of the piers are of exceptional richness and the huge masonry columns are deeply incised with geometric patterns. Often the arrangement was made more complex by the complexity of the piers themselves, so that it was not piers and columns that alternated, but rather, piers of entirely different form from each other, such as those of Sant' Ambrogio, Milan, where the nature of the vault dictated that the alternate piers bore a great deal more weight than the intermediate ones and are thus very much larger. Mainz Cathedral, Germany, has rectangular piers and possibly the earliest example of an internal elevation of 3 stages. (Gothic vault) Malmesbury Abbey, England, has hollow core columns, probably filled with rubble. (Gothic vault) The cathedral of Santiago de Compostela, Spain, has large drum columns with attached shafts supporting a barrel vault. Durham Cathedral, England, has decorated masonry columns alternating with piers of clustered shafts supporting the earliest pointed high ribs. 
The foliate Corinthian style provided the inspiration for many Romanesque capitals, and the accuracy with which they were carved depended very much on the availability of original models, those in Italian churches such as Pisa Cathedral or church of Sant'Alessandro in Lucca and southern France being much closer to the Classical than those in England. The Corinthian capital is essentially round at the bottom where it sits on a circular column and square at the top, where it supports the wall or arch. This form of capital was maintained in the general proportions and outline of the Romanesque capital. This was achieved most simply by cutting a rectangular cube and taking the four lower corners off at an angle so that the block was square at the top, but octagonal at the bottom, as can be seen at St. Michael's Hildesheim. This shape lent itself to a wide variety of superficial treatments, sometimes foliate in imitation of the source, but often figurative. In Northern Europe the foliate capitals generally bear far more resemblance to the intricacies of manuscript illumination than to Classical sources. In parts of France and Italy there are strong links to the pierced capitals of Byzantine architecture. It is in the figurative capitals that the greatest originality is shown. While some are dependent on manuscripts illustrations of Biblical scenes and depictions of beasts and monsters, others are lively scenes of the legends of local saints. The capitals, while retaining the form of a square top and a round bottom, were often compressed into little more than a bulging cushion-shape. This is particularly the case on large masonry columns, or on large columns that alternate with piers as at Durham.(See illustrated above) Capital of Corinthian form with anthropomorphised details, Pisa Campanile Capital of Corinthian form with Byzantine decoration and carved dosseret, San Martín de Tours, Palencia Capital of amorphous form surmounting a cluster of shafts. The figurative carving shows a winged devil directing Herod to slaughter the Innocents. Monastery of San Juan de Duero, Soria Vaults and roofs The majority of buildings have wooden roofs, generally of a simple truss, tie beam or king post form. In the case of trussed rafter roofs, they are sometimes lined with wooden ceilings in three sections like those that survive at Ely and Peterborough cathedrals in England. In churches, typically the aisles are vaulted, but the nave is roofed with timber, as is the case at both Peterborough and Ely. In Italy where open wooden roofs are common, and tie beams frequently occur in conjunction with vaults, the timbers have often been decorated as at San Miniato al Monte, Florence. Vaults of stone or brick took on several different forms and showed marked development during the period, evolving into the pointed ribbed arch characteristic of Gothic architecture. The simplest type of vaulted roof is the barrel vault in which a single arched surface extends from wall to wall, the length of the space to be vaulted, for example, the nave of a church. An important example, which retains Medieval paintings, is the vault of Saint-Savin-sur-Gartempe, France, of the early 12th century. However, the barrel vault generally required the support of solid walls, or walls in which the windows were very small. Groin vaults occur in early Romanesque buildings, notably at Speyer Cathedral where the high vault of about 1060 is the first employment in Romanesque architecture of this type of vault for a wide nave. 
In later buildings employing ribbed vaulting, groin vaults are most frequently used for the less visible and smaller vaults, particularly in crypts and aisles. A groin vault is almost always square in plan and is constructed of two barrel vaults intersecting at right angles. Unlike a ribbed vault, the entire arch is a structural member. Groin vaults are frequently separated by transverse arched ribs of low profile, as at Speyer and Santiago de Compostela. At Sainte Marie Madeleine, Vézelay, the ribs are square in section, strongly projecting and polychrome.

Ribbed vaults came into general use in the 12th century. In ribbed vaults, not only are there ribs spanning the vaulted area transversely, but each vaulted bay has diagonal ribs, following the same course as the groins in a groin vault. However, whereas in a groin vault the vault itself is the structural member, in a ribbed vault it is the ribs that are the structural members, and the spaces between them can be filled with lighter, non-structural material.

Because Romanesque arches are nearly always semi-circular, the structural and design problem inherent in the ribbed vault is that the diagonal span is larger, and therefore higher, than the transverse span (a brief worked example of this geometry is given at the end of this section). The Romanesque builders used a number of solutions to this problem. One was to have the centre point where the diagonal ribs met as the highest point, with the infill of all the surfaces sloping upwards towards it, in a domical manner. This solution was employed in Italy at San Michele, Pavia, and Sant' Ambrogio, Milan. The solution employed in England was to stilt the transverse ribs, maintaining a horizontal central line to the roof like that of a barrel vault. The diagonal ribs could also be depressed, a solution used on the sexpartite vaults at both Saint-Étienne (Abbaye-aux-Hommes) and Sainte-Trinité (Abbaye-aux-Dames) at Caen, France, in the late 11th and early 12th centuries.

Pointed arched vault

The problems encountered in the structure and appearance of vaults were solved late in the Romanesque period with the introduction of pointed arched ribs, which allowed the height of both diagonal and transverse ribs to be varied in proportion to each other. Pointed ribs made their first appearance in the transverse ribs of the vaults at Durham Cathedral in northern England, dating from 1128. Durham is a cathedral of massive Romanesque proportions and appearance, yet its builders introduced several structural features that were new to architectural design and were later to be hallmark features of the Gothic. Another Gothic structural feature employed at Durham is the flying buttress. However, these are hidden beneath the roofs of the aisles. The earliest pointed vault in France is that of the narthex of La Madeleine, Vézelay, dating from 1130. Pointed ribs were subsequently employed with the development of the Gothic style at the east end of the Basilica of St Denis in Paris in 1140. An early ribbed vault in the Romanesque architecture of Sicily is that of the chancel at the Cathedral of Cefalù.

Domes in Romanesque architecture are generally found within crossing towers at the intersection of a church's nave and transept, which conceal the domes externally. Called a tiburio, this tower-like structure often has a blind arcade near the roof. Romanesque domes are typically octagonal in plan and use corner squinches to translate a square bay into a suitable octagonal base. Octagonal cloister vaults appear "in connection with basilicas almost throughout Europe" between 1050 and 1100. The precise form differs from region to region.
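The following is a purely illustrative geometric note, not drawn from the sources cited in this article; the assumed quantities (a square vaulting bay of side s, ribs struck as true semicircles, squinches set symmetrically across the corners) are simplifications for the sake of the arithmetic. A semicircular arch rises to half its span, so over a square bay the transverse ribs and the diagonal ribs crown at different heights:

\[
h_{\text{transverse}} = \frac{s}{2}, \qquad
h_{\text{diagonal}} = \frac{s\sqrt{2}}{2} \approx 0.71\,s, \qquad
\frac{h_{\text{diagonal}}}{h_{\text{transverse}}} = \sqrt{2} \approx 1.41 .
\]

The diagonal crowns therefore stand roughly 40 per cent above the transverse crowns, which is the discrepancy that the domical infill at Pavia and Milan, the stilted transverse ribs favoured in England, and the depressed diagonals at Caen were each devised to reconcile. Similarly, for the octagonal domes mentioned above, squinches thrown across the four corners of a square bay of side s produce a regular octagon when each corner is cut back a distance

\[
x = \frac{s}{2+\sqrt{2}} \approx 0.29\,s
\]

along both walls, giving an octagon of side \((\sqrt{2}-1)\,s \approx 0.41\,s\) on which the dome or cloister vault can be raised; in practice the resulting octagons were often less than regular.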
The painted barrel vault at the Abbey Church of Saint-Savin-sur-Gartempe is supported on tall marbled columns.
The nave of Lisbon Cathedral is covered by a ribbed barrel vault and has an upper, arched gallery (triforium).
The Church of St Philibert, Tournus, has a series of transverse barrel vaults supported on arches.
The aisle of the Abbey Church at Mozac has a groin vault supported on transverse arches.
The aisles at Peterborough Cathedral have quadripartite ribbed vaults. (The nave has an ancient painted wooden ceiling.)
The ribbed vaults at Saint-Étienne, Caen, are sexpartite and span two bays of the nave.
The crossing of Speyer Cathedral, Germany, has a dome on squinches.

Many parish churches, abbey churches and cathedrals are in the Romanesque style, or were originally built in the Romanesque style and have subsequently undergone changes. The simplest Romanesque churches are aisleless halls with a projecting apse at the chancel end, or sometimes, particularly in England, a projecting rectangular chancel with a chancel arch that might be decorated with mouldings. More ambitious churches have aisles separated from the nave by arcades.

Abbey and cathedral churches generally follow the Latin Cross plan. In England, the extension eastward may be long, while in Italy it is often short or non-existent, the church being of T plan, sometimes with apses on the transept ends as well as to the east. In France the church of St Front, Périgueux, appears to have been modelled on St. Mark's Basilica, Venice, or the Byzantine Church of the Holy Apostles, and is of a Greek cross plan with five domes. In the same region, Angoulême Cathedral is an aisleless church of the Latin cross plan, more usual in France, but is also roofed with domes. In Germany, Romanesque churches are often of distinctive form, having apses at both east and west ends, the main entrance being central to one side. It is probable that this form came about to accommodate a baptistery at the west end.

NOTE: The plans below do not show the buildings in their current states.
The plan of the Abbey of St Gall, Switzerland
Germany, Speyer Cathedral
France, Autun Cathedral
France, Angoulême Cathedral

The Abbey Church of St. Gall, Switzerland, shows the plan that was to become common throughout Germanic Europe. It is a Latin Cross with a comparatively long nave and short transepts and eastern end, which is apsidal. The nave is aisled, but the chancel and transepts are not. It has an apsidal west end, which was to become a feature of churches of Germany, such as Worms Cathedral.

Speyer Cathedral, Germany, also has an aisleless transept and chancel. It has a markedly modular look. A typical Germanic characteristic is the presence of towers framing the chancel and the west end. There is marked emphasis on the western entrance, called a Westwerk, which is seen in several other churches. Each vault compartment covers two narrow bays of the nave.

At Autun Cathedral, France, the pattern of the nave bays and aisles extends beyond the crossing and into the chancel, each aisle terminating in an apse. Each nave bay is separated at the vault by a transverse rib. Each transept projects to the width of two nave bays. The entrance has a narthex which screens the main portal. This type of entrance was to be elaborated in the Gothic period on the transepts at Chartres.
Angoulême Cathedral, France, is one of several instances in which the Byzantine churches of Constantinople seem to have been influential in the design, in which the main spaces are roofed by domes. This structure has necessitated the use of very thick walls and massive piers from which the domes spring. There are radiating chapels around the apse, which is a typically French feature and was to evolve into the chevet.

As was typically the case in England, Ely Cathedral was a Benedictine monastery, serving both monastic and secular functions. To facilitate this, the chancel or "presbytery" is longer than usually found in Europe, as are the aisled transepts which contained chapels. In England, emphasis was placed on the orientation of the chapels to the east. The very large piers at the crossing signify that there was once a tower. The western end, having two round towers flanking a tall central tower, was unique in Britain. Ely Cathedral was never vaulted and retains a wooden ceiling over the nave.

The cathedral of Santiago de Compostela shares many features with Ely, but is typically Spanish in its expansive appearance. Santiago held the body of St. James and was the most significant pilgrimage site in Europe. The narthex, the aisles, the large aisled transepts and numerous projecting chapels reflect this. The chancel is short, compared to that of Ely, and the altar is set so as to provide a clear view to a vast congregation at one time.

In section, the typical aisled church or cathedral has a nave with a single aisle on either side. The nave and aisles are separated by an arcade carried on piers or on columns. The roof of the aisle and the outer walls help to buttress the upper walls and vault of the nave, if present. Above the aisle roof is a row of windows known as the clerestory, which gives light to the nave. During the Romanesque period there was a development from this two-stage elevation to a three-stage elevation in which there is a gallery, known as a triforium, between the arcade and the clerestory. This varies from a simple blind arcade decorating the walls, to a narrow arcaded passage, to a fully developed second storey with a row of windows lighting the gallery.

This drawing is a reconstruction by Dehio of the appearance of the Romanesque Konstanz Cathedral before its alterations in the Gothic style. It has a typical elevation of nave and aisles with wooden panelled ceilings and an apsidal east end.
Exterior elevation, Peterborough Cathedral

Church and cathedral east ends

The eastern end of a Romanesque church is almost always semi-circular, with either a high chancel surrounded by an ambulatory as in France, or a square end from which an apse projects as in Germany and Italy. Where square ends exist in English churches, they are probably influenced by Anglo-Saxon churches. Peterborough and Norwich Cathedrals have retained round east ends in the French style. However, in France, simple churches without apses and with no decorative features were built by the Cistercians, who also founded many houses in England, frequently in remote areas.

The Cathedral of Santa Maria d'Urgell, Spain, has an apsidal east end projecting at a lower level to the choir and decorated with an arcade below the roofline. This form is usual in Italy and Germany.
The Abbey of Sant'Antimo has a high apsidal end surrounded by an ambulatory and with small projecting apses Church and cathedral façades and external decoration Romanesque church facades, generally to the west end of the building, are usually symmetrical, have a large central portal made significant by its mouldings or porch, and an arrangement of arched-topped windows. In Italy there is often a single central ocular or wheel window. The common decorative feature is arcading. Smaller churches often have a single tower that is usually placed to the western end in France or England, either centrally or to one side, while larger churches and cathedrals often have two. In France, Saint-Étienne, Caen, presents the model of a large French Romanesque facade. It is a symmetrical arrangement of nave flanked by two tall towers each with two buttresses of low flat profile that divide the facade into three vertical units. The lowest stage is marked by large doors, each set within an arch in each of the three vertical sections. The wider central section has two tiers of three identical windows, while in the outer sections there are two tiers of single windows, giving emphasis to the mass of the towers. The towers rise above the facade through three further tiers, the lowest of tall blind arcading, the next of arcading pierced by two narrow windows and the third of two large windows, divided into two lights by a colonnette. This facade can be seen as the foundation for many other buildings, including both French and English Gothic churches. While the form is typical of northern France, its various components were common to many Romanesque churches of the period across Europe. Similar facades are found in Portugal. In England, Southwell Cathedral has maintained this form, despite the insertion of a huge Gothic window between the towers. Lincoln and Durham must once have looked like this. In Germany, Limburg Cathedral has a rich variety of openings and arcades in horizontal storeys of varying heights. The churches of San Zeno Maggiore, Verona, and San Michele, Pavia, present two types of facade that are typical of Italian Romanesque, that which reveals the architectural form of the building, and that which screens it. At San Zeno, the components of nave and aisles are made clear by the vertical shafts that rise to the level of the central gable and by the varying roof levels. At San Miniato al Monte the definition of the architectural parts is made even clearer by the polychrome marble, a feature of many Italian Medieval facades, particularly in Tuscany. At San Michele the vertical definition is present as at San Zeno, but the rooflines are screened behind a single large gable decorated with stepped arcading. At Santa Maria della Pieve, Arezzo, this screening is carried even further, as the roofline is horizontal and the arcading rises in many different levels while the colonettes that support them have a great diversity of decoration. In the Rhineland and Netherlands the Carolingian form of west end known as the westwerk prevailed. Towers and apse of the western end are often incorporated into a multi-storey structure that bears little structural or visual relationship to the building behind it. These westwerks take a great variety of forms as may be seen at Maria Laach Abbey, St Gertrude, Nivelles, and St Serviatius, Maastricht. The Old Cathedral of Coimbra, Portugal, is fortress-like and battlemented. The two central openings are deeply recessed. Pisa Cathedral, Italy. 
The entire building is faced with marble striped in white and grey. On the facade this pattern is overlaid with architectonic decoration of blind arcading below tiers of dwarf galleries. The use of three portals became increasingly common.
Angoulême Cathedral, France. The facade here, richly decorated with architectonic and sculptural forms, has much in common with that at Empoli in that it screens the form of the building behind it.
Saint-Étienne, Abbaye aux Hommes, Caen, France, 11th century, with its tall towers, three portals and neat definition of architectural forms, became a model for the facades of many later cathedrals across Europe. The spires are 14th-century.
Southwell Cathedral, England, 1120, follows the Norman model, with pyramidal spires such as probably once existed at Saint-Étienne. The Perpendicular window and battlement are late Gothic.
Lisbon Cathedral, Portugal, 1147, has a similar form to the Old Cathedral of Coimbra above, with the addition of two sturdy bell towers in the Norman manner and a wheel window.
Parma Cathedral, Italy, 1178, has a screen facade ornamented with galleries. At the centre is an open porch surmounted by a ceremonial balcony. The tower (Gothic, 1284) is a separate structure, as is usual in Italy.

Towers were an important feature of Romanesque churches and a great number of them are still standing. They take a variety of forms: square, circular and octagonal, and are positioned differently in relation to the church building in different countries. In northern France, two large towers, such as those at Caen, were to become an integral part of the facade of any large abbey or cathedral. In central and southern France this is more variable and large churches may have one tower or a central tower. Large churches of Spain and Portugal usually have two towers. Many abbeys of France, such as that at Cluny, had many towers of varied forms. This is also common in Germany, where the apses were sometimes framed with circular towers and the crossing surmounted by an octagonal tower, as at Worms Cathedral. Large paired towers of square plan could also occur on the transept ends, such as those at Tournai Cathedral in Belgium. In Germany, where four towers frequently occur, they often have spires that may be four- or eight-sided, or the distinctive Rhenish helm shape seen on the cathedrals of Limburg or Speyer. It is also common to see bell or onion-shaped spires of the Baroque period surmounting Romanesque towers in central and Eastern Europe.

In England, for large abbeys and cathedral buildings, three towers were favoured, with the central tower being the tallest. This was often not achieved, owing to the slow process of building in stages, and in many cases the upper parts of the tower were not completed until centuries later, as at Durham and Lincoln. Large Norman towers exist at the cathedrals of Durham, Exeter, Southwell and Norwich, and at Tewkesbury Abbey. Such towers were often topped during the late Medieval period with a Gothic spire of wooden construction covered with lead, copper or shingles. In the case of Norwich Cathedral, the huge, ornate, 12th-century crossing-tower received a 15th-century masonry spire rising to a height of 320 feet and remaining to this day. In Italy towers are almost always free-standing and the position is often dictated by the landform of the site, rather than aesthetics. This is the case in nearly all Italian churches both large and small, except in Sicily, where a number of churches were founded by the Norman rulers and are more French in appearance.
As a general rule, large Romanesque towers are square with corner buttresses of low profile, rising without diminishing through the various stages. Towers are usually marked into clearly defined stages by horizontal courses. As the towers rise, the number and size of openings increase, as can be seen on the right tower of the transept of Tournai Cathedral, where two narrow slits in the fourth level from the top become a single window, then two windows, then three windows at the uppermost level. This sort of arrangement is particularly noticeable on the towers of Italian churches, which are usually built of brick and may have no other ornament. Two fine examples occur at Lucca, at the church of San Frediano and at the Duomo. It is also seen in Spain.

In Italy there are a number of large free-standing towers that are circular, the most famous of these being the Leaning Tower of Pisa. In other countries where circular towers occur, such as Germany, they are usually paired and often flank an apse. Circular towers are uncommon in England, but occur throughout the Early Medieval period in Ireland.

The most massive Romanesque crossing tower is that at Tewkesbury Abbey, in England, where large crossing towers are characteristic. (See St Albans Cathedral, illustrated above.)
The Leaning Tower of Pisa with its encircling arcades is the best known (and most richly decorated) of the many circular towers found in Italy.

Romanesque churches generally have a single portal centrally placed on the west front, the focus of decoration for the facade of the building. Some churches, such as Saint-Étienne, Caen (11th century), and Pisa Cathedral (late 12th century), had three western portals, in the manner of Early Christian basilicas. Many churches, both large and small, had lateral entrances that were commonly used by worshippers.

Romanesque doorways have a characteristic form, with the jambs having a series of receding planes, into each of which is set a circular shaft, all surmounted by a continuous abacus. The semi-circular arch which rises from the abacus has the same serried planes and circular mouldings as the jambs. There are typically four planes containing three shafts, but there may be as many as twelve shafts, symbolic of the apostles.

The opening of the portal may be arched, or may be set with a lintel supporting a tympanum, generally carved, but in Italy sometimes decorated with mosaic or fresco. A carved tympanum generally constitutes the major sculptural work of a Romanesque church. The subject of the carving on a major portal may be Christ in Majesty or the Last Judgement. Lateral doors may include other subjects such as the Birth of Christ. The portal may be protected by a porch, with simple open porches being typical of Italy, and more elaborate structures typical of France and Spain.

The mouldings of the arched central west door of Lincoln Cathedral are decorated by chevrons and other formal and figurative ornament typical of English Norman work. The "Gallery of Kings" above the portal is Gothic.
The Porta Platerias, Cathedral of Santiago de Compostela, by Master Esteban, has two wide openings with tympanums supported on brackets. The sculptured frieze above is protected by an eave on corbels.

The structure of large churches differed regionally and developed across the centuries.
The use of piers of rectangular plan to support arcades was common, as at Mainz Cathedral and St Gertrude Nivelle, and remained usual in smaller churches across Europe, with the arcades often taking the form of openings through the surface of a wall. In Italy, where there was a strong tradition of using marble columns, complete with capital, base and abacus, this remained prevalent, often reusing existent ancient columns, as at San Miniato al Monte. A number of 11th-century churches have naves distinguished by huge circular columns with no clerestory, or a very small one as at St Philibert, Tournus. In England stout columns of large diameter supported decorated arches, gallery and clerestory, as at the nave of Malmesbury Abbey (see "Piers and columns", above). By the early 12th century composite piers had evolved, in which the attached shafts swept upward to a ribbed vault or were continued into the mouldings of the arcade, as at Vézelay Abbey, Saint-Étienne, Caen, and Peterborough Cathedral. The nature of the internal roofing varied greatly, from open timber roofs, and wooden ceilings of different types, which remained common in smaller churches, to simple barrel vaults and groin vaults and increasingly to the use of ribbed vaults in the late 11th and 12th centuries, which were to become a common feature of larger abbey churches and cathedrals. A number of Romanesque churches are roofed with a series of Domes. At Fontevrault Abbey the nave is covered by four domes, while at the Church of Saint Front, Périgueux, the church is of Greek cross plan, with a central dome surrounded by four smaller domes over the nave, chancel and transepts. Internal decoration varied across Europe. Where wide expanses of wall existed, they were often plastered and painted. Wooden ceilings and timber beams were decorated. In Italy walls were sometimes faced with polychrome marble. Where buildings were constructed of stone that was suitable for carving, many decorative details occur, including ornate capitals and mouldings. The apsidal east end was often a focus of decoration, with both architectonic forms such as arcading and pictorial features such as carved figures, murals and occasionally mosaics. Stained glass came into increasing use from the 11th century. In many churches the eastern end has been rebuilt in a later style. Of England's Norman cathedrals, no eastern end remains unchanged. In France the eastern terminals of the important abbeys of Caen, Vézelay and, most significantly, the Basilica of St Denis were completely rebuilt in the Gothic style. In Germany, major reconstructions of the 19th century sought to return many Romanesque buildings to their original form. Examples of simple Romanesque apses can be seen in the images of St Gertrude, Nivelles; St Philibert, Tournus, and San Miniato al Monte. The Church of St Philibert, Tournus, (990-1019) has tall circular piers supporting the arcade and is roofed with a series of barrel vaults supported on arches. Small clerestory windows light the vault. Abbey of St Mary Magdalene, Vézelay, (consecrated 1104) has clusters of vertical shafts rising to support transverse arches and a groin vault. The dressed polychrome stonework has exquisitely detailed mouldings. East end is Gothic. The nave of Peterborough Cathedral (1118–93) in three stages of arcade, gallery & clerestory, typical of Norman abbey churches. The rare wooden ceiling retains its original decoration (c. 1230). Gothic arches beneath tower (c. 1350). 
Among the structures associated with church buildings are crypts, porches, chapter houses, cloisters and baptisteries. Crypts are often present as an underlying structure to a substantial church, and are generally a completely discrete space, but occasionally, as in some Italian churches, may be a sunken space under a raised chancel and open, via steps, to the body of the nave. Romanesque crypts have survived in many instances, such as Canterbury Cathedral, when the church itself has been rebuilt. The usual construction of a Romanesque crypt is with many short stout columns carrying groin vaults, as at Worcester Cathedral. Porches sometimes occur as part of the original design of a facade. This is very much the case in Italy, where they are usually only one bay deep and are supported on two columns, often resting on couchant lions, as at St Zeno, Verona.See above. Elsewhere, porches of various dates have been added to the facade or side entrance of existent churches and may be quite a substantial structure, with several bays of vaulting supported on an open or partially open arcade, and forming a sort of narthex as at the Church of St Maria, Laach.See above In Spain, Romanesque churches often have large lateral porches, like loggias. Chapter houses often occur adjacent to monastic or cathedral churches. Few have survived intact from the Romanesque period. Early chapter houses were rectangular in shape, with the larger ones sometimes having groin or ribbed vaults supported on columns. Later Romanesque chapter houses sometimes had an apsidal eastern end. The chapter house at Durham Cathedral is a wide space with a ribbed vault, restored as originally constructed in 1130. The circular chapter house at Worcester Cathedral, built by Bishop Wulfstan (1062–95), was the first circular chapter house in Europe and was much imitated in England. Cloisters are generally part of any monastic complex and also occur at cathedral and collegiate churches. They were essential to the communal way of life, a place for both working during daylight hours and relaxing during inclement weather. They usually abut the church building and are enclosed with windowless walls on the outside and an open arcade on the inside, looking over a courtyard or "cloister garth". They may be vaulted or have timber roofs. The arcades are often richly decorated and are home to some of the most fanciful carved capitals of the Romanesque period with those of Santo Domingo de Silos in Spain and the Abbey of St Pierre Moissac, being examples. Many Romanesque cloisters have survived in Spain, France, Italy and Germany, along with some of their associated buildings. Baptisteries often occur in Italy as a free standing structure, associated with a cathedral. They are generally octagonal or circular and domed. The interior may be arcaded on several levels as at Pisa Cathedral. Other notable Romanesque baptisteries are that at Parma Cathedral remarkable for its galleried exterior, and the polychrome Baptistery of San Giovanni of Florence Cathedral, with vault mosaics of the 13th century including Christ in Majesty, possibly the work of the almost legendary Coppo di Marcovaldo. The groin-vaulted crypt of Worcester Cathedral The cloister of Lavaudieu Abbey The Baptistery of Parma Cathedral Arcading is the single most significant decorative feature of Romanesque architecture. 
It occurs in a variety of forms, from the Lombard band, which is a row of small arches that appear to support a roofline or course, to shallow blind arcading that is often a feature of English architecture and is seen in great variety at Ely Cathedral, to the open dwarf gallery, first used at Speyer Cathedral and widely adopted in Italy as seen on both Pisa Cathedral and its famous Leaning Tower. Arcades could be used to great effect, both externally and internally, as exemplified by the church of Santa Maria della Pieve, in Arezzo. Overlapping arches form a blind arcade at St Lawrence's church Castle Rising, England. (1150) The semi-circular arches form pointed arches where they overlap, a motif which may have influenced Gothic. Dwarf galleries are a major decorative feature on the exterior of Speyer Cathedral, Germany (1090–1106), surrounding the walls and encircling the towers. This was to become a feature of Rhenish Romanesque. The eastern apse of Parma Cathedral, Italy (early 12th century) combines a diversity of decorative features: blind arcading, galleries, courses and sculptured motifs. The arcading on the facade of Lucca Cathedral, Tuscany (1204) has many variations in its decorative details, both sculptural and in the inlaid polychrome marble. Polychrome blind arcading of the apse of Monreale Cathedral, Sicily (1174–82) The decoration indicates Islamic influence in both the motifs and the fact that all the arches, including those of the windows, are pointed. The Romanesque period produced a profusion of sculptural ornamentation. This most frequently took a purely geometric form and was particularly applied to mouldings, both straight courses and the curved moldings of arches. In La Madeleine, Vezelay, for example, the polychrome ribs of the vault are all edged with narrow filets of pierced stone. Similar decoration occurs around the arches of the nave and along the horizontal course separating arcade and clerestory. Combined with the pierced carving of the capitals, this gives a delicacy and refinement to the interior. In England, such decoration could be discrete, as at Hereford and Peterborough cathedrals, or have a sense of massive energy as at Durham where the diagonal ribs of the vaults are all outlined with chevrons, the mouldings of the nave arcade are carved with several layers of the same and the huge columns are deeply incised with a variety of geometric patterns creating an impression of directional movement. These features combine to create one of the richest and most dynamic interiors of the Romanesque period. Although much sculptural ornament was sometimes applied to the interiors of churches, the focus of such decoration was generally the west front, and in particular, the portals. Chevrons and other geometric ornaments, referred to by 19th-century writers as "barbaric ornament", are most frequently found on the mouldings of the central door. Stylized foliage often appears, sometimes deeply carved and curling outward after the manner of the acanthus leaves on Corinthian capitals, but also carved in shallow relief and spiral patterns, imitating the intricacies of manuscript illuminations. In general, the style of ornament was more classical in Italy, such as that seen around the door of San Giusto in Lucca, and more "barbaric" in England, Germany and Scandinavia, such as that seen at Lincoln and Speyer Cathedrals. 
France produced a great range of ornament, with particularly fine interwoven and spiralling vines in the "manuscript" style occurring at Saint-Sernin, Toulouse.
The portal of the Hermitage of St Segundo, Avila, has paired creatures and decorative bands of floral and interlaced ornament. The pairing of creatures could draw on Byzantine and Celtic models.
On these mouldings around the portal of Lincoln Cathedral are formal chevron ornament, tongue-poking monsters, vines and figures, and symmetrical motifs.
St Martin's Church, Gensac-la-Pallue, has capitals with elaborate interlacing.

With the fall of the Roman Empire, the tradition of carving large works in stone and sculpting figures in bronze died out. The best-known surviving large sculptural work of Proto-Romanesque Europe is the life-size wooden Crucifix commissioned by Archbishop Gero of Cologne in about 960–65. During the 11th and 12th centuries, figurative sculpture flourished in a distinctly Romanesque style that can be recognised across Europe, although the most spectacular sculptural projects are concentrated in South-Western France, Northern Spain and Italy.

Major figurative decoration occurs particularly around the portals of cathedrals and churches, ornamenting the tympanum, lintels, jambs and central posts. The tympanum is typically decorated with the imagery of Christ in Majesty with the symbols of the Four Evangelists, drawn directly from the gilt covers of medieval Gospel Books. This style of doorway occurs in many places and continued into the Gothic period. A rare survival in England is that of the "Prior's Door" at Ely Cathedral. In France, many have survived, with impressive examples at the Abbey of Saint-Pierre, Moissac, the Abbey of Sainte-Marie, Souillac, and the Abbey of la Madeleine, Vézelay – all daughter houses of Cluny, with extensive other sculpture remaining in cloisters and other buildings. Nearby, Autun Cathedral has a Last Judgement of great rarity in that it has uniquely been signed by its creator, Gislebertus (who was perhaps the patron rather than the sculptor). The same artist is thought to have worked at la Madeleine, Vézelay, which uniquely has two elaborately carved tympana, the early inner one representing the Last Judgement and that on the outer portal of the narthex representing Jesus sending forth the Apostles to preach to the nations.

It is a feature of Romanesque art, both in manuscript illumination and sculptural decoration, that figures are contorted to fit the space that they occupy. Among the many examples that exist, one of the finest is the figure of the Prophet Jeremiah from the pillar of the portal of the Abbey of Saint-Pierre, Moissac, France, from about 1130. A significant motif of Romanesque design is the spiral, a form applied to both plant motifs and drapery in Romanesque sculpture. An outstanding example of its use in drapery is that of the central figure of Christ on the outer portal at la Madeleine, Vézelay.

Many of the smaller sculptural works, particularly capitals, are Biblical in subject and include scenes of Creation and the Fall of Man, episodes from the life of Christ and those Old Testament scenes that prefigure his Death and Resurrection, such as Jonah and the Whale and Daniel in the lions' den. Many Nativity scenes occur, the theme of the Three Kings being particularly popular. The cloisters of Santo Domingo de Silos Abbey in Northern Spain, and Moissac, are fine examples surviving complete.
Details of the portal of Oloron Cathedral show a demon, a lion swallowing a man and kings with musical instruments.
A relief from St Trophime, Arles, showing King Herod and the Three Kings, follows the conventions in that the seated Herod is much larger than the standing figures.
Notre-Dame-en-Vaux, Châlons-en-Champagne. This paired capital representing Christ washing the feet of the disciples is lively and naturalistic.

The large wall surfaces and plain curving vaults of the Romanesque period lent themselves to mural decoration. Unfortunately, many of these early wall paintings have been destroyed by damp or the walls have been replastered and painted over. In most of Northern Europe such pictures were systematically destroyed in bouts of Reformation iconoclasm. In other countries they have suffered from war, neglect and changing fashion.

A classic scheme for the full painted decoration of a church, derived from earlier examples often in mosaic, had, as its focal point in the semi-dome of the apse, Christ in Majesty or Christ the Redeemer enthroned within a mandorla and framed by the four winged beasts, symbols of the Four Evangelists, comparing directly with examples from the gilt covers or the illuminations of Gospel Books of the period. If the Virgin Mary was the dedicatee of the church, she might replace Christ here. On the apse walls below would be saints and apostles, perhaps including narrative scenes, for example of the saint to whom the church was dedicated. On the sanctuary arch were figures of apostles, prophets or the twenty-four "elders of the Apocalypse", looking in towards a bust of Christ, or his symbol the Lamb, at the top of the arch. The north wall of the nave would contain narrative scenes from the Old Testament, and the south wall from the New Testament. On the rear west wall would be a Doom painting or Last Judgement, with an enthroned and judging Christ at the top.

One of the most intact schemes to exist is that at Saint-Savin-sur-Gartempe in France. (See picture above under "Vault".) The long barrel vault of the nave provides an excellent surface for fresco, and is decorated with scenes of the Old Testament, showing the Creation, the Fall of Man and other stories, including a lively depiction of Noah's Ark, complete with a fearsome figurehead and numerous windows through which can be seen Noah and his family on the upper deck, birds on the middle deck, and the pairs of animals on the lower. Another scene shows with great vigour the swamping of Pharaoh's army by the Red Sea. The scheme extends to other parts of the church, with the martyrdom of the local saints shown in the crypt, the Apocalypse in the narthex, and Christ in Majesty. The range of colours employed is limited to light blue-green, yellow ochre, reddish brown and black. Similar paintings exist in Serbia, Spain, Germany, Italy and elsewhere in France.

A frieze of figures occupies the zone below the semi-dome in the apse. Abbey of St Pere of Burgal, Catalonia, Spain.
In England the major pictorial theme occurs above the chancel arch in parish churches. St John the Baptist, Clayton, Sussex.

The oldest-known fragments of medieval pictorial stained glass appear to date from the 10th century. The earliest intact figures are five prophet windows at Augsburg, dating from the late 11th century. The figures, though stiff and formalised, demonstrate considerable proficiency in design, both pictorially and in the functional use of the glass, indicating that their maker was well accustomed to the medium.
At Canterbury and Chartres Cathedrals, a number of panels of the 12th century have survived, including, at Canterbury, a figure of Adam digging, and another of his son Seth, from a series of Ancestors of Christ. Adam represents a highly naturalistic and lively portrayal, while in the figure of Seth the robes have been used to great decorative effect, similar to the best stone carving of the period.

Many of the magnificent stained glass windows of France, including the famous windows of Chartres, date from the 13th century. Far fewer large windows remain intact from the 12th century. One such is the Crucifixion of Poitiers, a remarkable composition that rises through three stages, the lowest with a quatrefoil depicting the Martyrdom of St Peter, the largest central stage dominated by the crucifixion, and the upper stage showing the Ascension of Christ in a mandorla. The figure of the crucified Christ is already showing the Gothic curve. The window is described by George Seddon as being of "unforgettable beauty".

King David from Augsburg Cathedral, late 11th century. One of a series of prophets that are the oldest stained glass windows in situ.
Two panels of lively figures, Seth and Adam, from the 12th-century Ancestors of Christ, Canterbury Cathedral, now set into a Perpendicular Gothic window with panels of many different dates.
King Otto II from a series of Emperors (12th and 13th centuries). The panels are now set into Gothic windows, Strasbourg Cathedral.
Detail of a small panel showing Kings David and Solomon set in an architectonic frame, from a large window at Strasbourg. Late 12th century. The alternation of red and blue is a typical device of simpler window designs. It is approximately one third of the height of, and is much less complex in execution than, the Emperor series of which Otto II is a part.

Transitional style and the continued use of Romanesque forms

During the 12th century, features that were to become typical of Gothic architecture began to appear. It is not uncommon, for example, for a part of a building that has been constructed over a lengthy period extending into the 12th century to have very similar arcading of both semi-circular and pointed shape, or windows that are identical in height and width, but in which the later ones are pointed. This can be seen on the towers of Tournai Cathedral and on the western towers and facade at Ely Cathedral. Other variations that appear to hover between Romanesque and Gothic occur, such as the facade designed by Abbot Suger at the Abbey of Saint-Denis, which retains much that is Romanesque in its appearance, and the facade of Laon Cathedral, which, despite its Gothic form, has round arches.

Abbot Suger's innovative choir of the Abbey of Saint-Denis, 1140–44, led to the adoption of the Gothic style by Paris and its surrounding area, but other parts of France were slower to take it up, and provincial churches continued to be built in the heavy manner and rubble stone of the Romanesque, even when the openings were treated with the fashionable pointed arch.

In England, the Romanesque groundplan, which in that country commonly had a very long nave, continued to affect the style of building of cathedrals and those large abbey churches which were also to become cathedrals at the dissolution of the monasteries in the 16th century. Despite the fact that English cathedrals were built or rebuilt in many stages, substantial areas of Norman building can be seen in many of them, particularly in the nave arcades.
In the case of Winchester Cathedral, the Gothic arches were literally carved out of the existent Norman piers. Other cathedrals have sections of their building which are clearly an intermediate stage between Norman and Gothic, such as the western towers of Ely Cathedral and part of the nave at Worcester Cathedral. The first truly Gothic building in England is the long eastern end of Canterbury Cathedral commenced in 1175. In Italy, although many churches such as Florence Cathedral and Santa Maria Novella were built in the Gothic style, or utilising the pointed arch and window tracery, Romanesque features derived from the Roman architectural heritage, such as sturdy columns with capitals of a modified Corinthian form, continued to be used. The pointed vault was utilised where convenient, but it is commonly interspersed with semicircular arches and vaults wherever they conveniently fit. The facades of Gothic churches in Italy are not always easily distinguishable from the Romanesque. Germany was not quick to adopt the Gothic style, and when it did so in the 1230s, the buildings were often modelled very directly upon French cathedrals, as Cologne Cathedral was modelled on Amiens. The smaller churches and abbeys continued to be constructed in a more provincial Romanesque manner, the date only being registered by the pointed window openings. The facade of Laon Cathedral, 1225, a Gothic cathedral, maintains rounded arches and arcading in the Romanesque manner. Ely Cathedral, England, the central western tower and framing smaller towers all had transitional features, 1180s. The tower to the left fell. Gothic porch, 1250s; lantern, 1390s. The facade of the Cathedral of Genoa has both round and pointed arches, and paired windows, a continuing Romanesque feature of Italian Gothic architecture. Romanesque castles, houses and other buildings The Romanesque period was a time of great development in the design and construction of defensive architecture. After churches and the monastic buildings with which they are often associated, castles are the most numerous type of building of the period. While most are in ruins through the action of war and politics, others, like William the Conqueror's White Tower within the Tower of London have remained almost intact. In some regions, particularly Germany, large palaces were built for rulers and bishops. Local lords built great halls in the countryside, while rich merchants built grand town houses. In Italy, city councils constructed town halls, while wealthy cities of Northern Europe protected their trading interests with warehouses and commercial premises. All over Europe, dwellers of the town and country built houses to live in, some of which, sturdily constructed in stone, have remained to this day with sufficient of their form and details intact to give a picture of the style of domestic architecture that was in fashion at the time. Examples of all these types of buildings can be found scattered across Europe, sometimes as isolated survivals like the two merchants' houses on opposite sides of Steep Hill in Lincoln, England, and sometimes giving form to a whole medieval city like San Gimignano in Tuscany, Italy. These buildings are the subject of a separate article. During the 19th century, when Gothic Revival architecture was fashionable, buildings were occasionally designed in the Romanesque style. 
There are a number of Romanesque Revival churches, dating from as early as the 1830s and continuing into the 20th century where the massive and "brutal" quality of the Romanesque style was appreciated and designed in brick. The Natural History Museum, London, designed by Alfred Waterhouse, 1879, on the other hand, is a Romanesque revival building that makes full use of the decorative potential of Romanesque arcading and architectural sculpture. The Romanesque appearance has been achieved while freely adapting an overall style to suit the function of the building. The columns of the foyer, for example, give an impression of incised geometric design similar to those of Durham Cathedral. However, the sources of the incised patterns are the trunks of palms, cycads and tropical tree ferns. The animal motifs, of which there are many, include rare and exotic species. The type of modern buildings for which the Romanesque style was most frequently adapted was the warehouse, where a lack of large windows and an appearance of great strength and stability were desirable features. These buildings, generally of brick, frequently have flattened buttresses rising to wide arches at the upper levels after the manner of some Italian Romanesque facades. This style was adapted to suit commercial buildings by opening the spaces between the arches into large windows, the brick walls becoming a shell to a building that was essentially of modern steel-frame construction, the architect Henry Hobson Richardson giving his name to the style, Richardsonian Romanesque. Good examples of the style are Marshall Field's Wholesale Store, Chicago, by H.H. Richardson, 1885, and the Chadwick Lead Works in Boston, USA, by William Preston, 1887. The style also lent itself to the building of cloth mills, steelworks and powerstations. The 19th-century reconstruction of the westwerk of the Romanesque Speyer Cathedral. see above - The traceried window to the left of the building indicates that the steeply gabled vestry dates from the Gothic period. - Gerville (1818): Je vous ai quelquefois parlé d'architecture romane. C’est un mot de ma façon qui me paraît heureusement inventé pour remplacer les mots insignifiants de saxone et de normande. Tout le monde convient que cette architecture, lourde et grossière, est l'opus romanum dénaturé ou successivement dégradé par nos rudes ancêtres. Alors aussi, de la langue latine, également estropiée, se faisait cette langue romane dont l'origine et la dégradation ont tant d'analogie avec l'origine et les progrès de l'architecture. Dites-moi donc, je vous prie, que mon nom romane est heureusement trouvé. English: I have sometimes spoken to you about Romanesque architecture. It is a word of my own which I invented (I think successfully) to replace the insignificant words of Saxon and Norman. Everyone agrees that this architecture, heavy and rough, is the opus romanum successively denatured or degraded by our rude ancestors. So too, out of the crippled Latin language, was made this Romance language whose origin and degradation have so much analogy with the origin and progress of architecture. Tell me, please, that my name Roman (esque) was invented with success. 
- de Caumont (1824): Le nom romane que nous donnons à cette architecture, qui ne doit avoir qu'un puisqu'elle est partout la même sauf de légères differences de localité, a d'ailleurs le mérite d'en indiquer l'origine et il n'est pas nouveau puisqu'on s'en sert déjà pour désigner la langue du même temps La langue romane est la langue latine dégénérée. L'architecture romane est l'architecture romaine abâtardie. (English: The name Roman(esque) we give to this architecture, which should be universal as it is the same everywhere with slight local differences, also has the merit of indicating its origin and is not new, since it is used already to describe the language of the same period. Romance language is degenerated Latin language. Romanesque architecture is debased Roman architecture.)
- List of Romanesque buildings
- List of regional characteristics of Romanesque churches
- Romanesque secular and domestic architecture
- Portuguese Romanesque architecture
- Romanesque art
- Romanesque sculpture
- Spanish Romanesque
- Romanesque Revival Architecture in the United Kingdom
- Banister Fletcher, A History of Architecture on the Comparative Method.
- Gidon 1934, p. 285-286
- Gidon, Ferdinand (1934). "L'invention de l'expression architecture romane par Gerville (1818) d'après quelques lettres de Gerville à Le Prévost". Bulletin de la Société des antiquaires de Normandie (in French) 42: 268–288.
- de Caumont, Arcisse (8 May 1824). "Essai sur l'architecture religieuse du moyen-âge, particulièrement en Normandie". Mémoires de la Société des antiquaires de Normandie (in French) (Mancel): 535–677. Retrieved 2012-06-24.
- Williams, Elizabeth (1 January 1985). "The perception of Romanesque art in the Romantic period: archaeological attitudes in France in the 1820s and 1830s". Forum for Modern Language Studies XXI (4): 303–321. doi:10.1093/fmls/XXI.4.303.
- Jean Hubert, Romanesque Art.
- Date from Hartmann-Virnich, as below
- de Caumont 1824, p. 550
- Gunn, William (1819). An inquiry into the origin and influence of Gothic architecture. R. and A. Taylor. p. 6. Retrieved 2012-07-06.
- Andreas Hartmann-Virnich: Was ist Romanik, Darmstadt 2004, p. 28-30
- Rolf Toman, Romanesque: Architecture, Sculpture, Painting
- Helen Gardner, Art through the Ages.
- George Holmes, ed. The Oxford History of Medieval Europe.
- Rolf Toman, pp. 114-117
- Copplestone, pp. 188-89
- Rolf Toman, pp. 70-73
- Rolf Toman, pp. 18, 177, 188
- "In the years that followed the year 1000, we witnessed the rebuilding of churches all over the universe, but especially in Italy and Gaul." Chronicle of Raoul Glaber, quoted by Jean Hubert, Romanesque Art.
- famous for the ancient Roman "Mouth of Truth" set into the wall of its narthex
- famous for the 15th-century Ghiberti Doors
- traditionally the marriage place of Romeo and Juliet
- John Harvey, English Cathedrals
- Alec Clifton-Taylor, The Cathedrals of England
- Rolf Toman, Romanesque.
- "Architecture". National Tourism Organisation of Serbia. Retrieved 2007-09-28.
- René Huyghe, Larousse Encyclopedia of Byzantine and Medieval Art
- This technique was also used in the Classical world, notably at the Parthenon.
- Nikolaus Pevsner, An Outline of European Architecture
- Banister Fletcher, p. 307
- Stephenson, Hammond & Davi 2005, p. 172.
- Jones, Murray & Murray 2013, p. 512.
- Porter 1928, p. 48.
- Kimball, F., & Edgell, G. H. (1918). A History of Architecture. New York. Harper & Brothers. 621 pages (page 252).
- With the exception of the Plan of St. Gall, which is from an ancient manuscript (and probably does not reflect an actual construction), they are all hypothetical reconstructions of groundplans as they existed in the 12th or 13th centuries. The Abbey Church of St. Gall has been replaced by a Baroque Church. Speyer has had its west front rebuilt twice, Ely Cathedral has lost the eastern arm, being replaced in the Gothic style, the central tower being replaced with the unique octagon and the northwest tower, never rebuilt. It has also gained a west porch. Santiago has had some substantial changes including a Baroque west front. - Crossley, Frederick H. (1962). The English Abbey. - Banister Fletcher p. 309 - "Romànic de la Vall de Camprodon". Elripolles.com. 2010-03-09. Retrieved 2011-06-11. - Alec Clifton-Taylor says "With the Cathedral of Durham we reach the incomparable masterpiece of Romanesque architecture not only in England but anywhere." - See details at Cologne Cathedral. - Howe, Jeffery. "Romanesque Architecture (slides)". A digital archive of architecture. Boston College. Retrieved 2007-09-28. - James Hall, A History of Ideas and Images in Italian Art, p154, 1983, John Murray, London, ISBN 0-7195-3971-4 - George Seddon in Lee, Seddon and Stephens, Stained Glass - Wim Swaan, Gothic Cathedrals - Conant, Kenneth J., Carolingian and Romanesque Architecture: 800 to 1200 (4th, illustrated, reprint ed.). Yale University Press. 1993. ISBN 978-0-300-05298-5. - V.I. Atroshenko and Judith Collins, The Origins of the Romanesque, Lund Humphries, London, 1985, ISBN 0-85331-487-X - Rolf Toman, Romanesque: Architecture, Sculpture, Painting, Könemann, (1997), ISBN 3-89508-447-6 - Banister Fletcher, A History of Architecture on the Comparative method (2001). Elsevier Science & Technology. ISBN 0-7506-2267-9. - Alfred Clapham, Romanesque Architecture in England British Council (1950) - Helen Gardner; Fred S. Kleiner, Christin J. Mamiya, Gardner's Art through the Ages. Thomson Wadsworth, (2004) ISBN 0-15-505090-7. - George Holmes, editor, The Oxford Illustrated History of Medieval Europe, Oxford University Press, (1992) ISBN 0-19-820073-0 - René Huyghe, Larousse Encyclopedia of Byzantine and Medieval Art, Paul Hamlyn, (1958) - François Ischer, Building the Great Cathedrals. Harry N. Abrams, (1998). ISBN 0-8109-4017-5. - Jones, Tom Devonshire; Murray, Linda; Murray, Peter, eds. (2013). The Oxford Dictionary of Christian Art and Architecture (illustrated ed.). Oxford University Press. ISBN 978-0-199-68027-6. - Nikolaus Pevsner, An Outline of European Architecture. Pelican Books (1964) - Porter, Arthur Kingsley (1928). Spanish Romanesque Sculpture, Volume 1 (illustrated ed.). Hacker Art Books. - John Beckwith, Early Medieval Art, Thames and Hudson, (1964) - Peter Kidson, The Medieval World, Paul Hamlyn, (1967) - T. Francis Bumpus,, The Cathedrals and Churches of Belgium, T. Werner Laurie. (1928) - Alec Clifton-Taylor, The Cathedrals of England, Thames and Hudson (1967) - John Harvey, English Cathedrals, Batsford (1961). - Stephenson, Davis; Hammond, Victoria; Davi, Keith F. (2005). Visions of Heaven: the Dome in European Architecture (illustrated ed.). Princeton Architectural Press. p. 174. ISBN 978-1-56898-549-7. 
- Trewin Copplestone, World Architecture, an Illustrated History, Paul Hamlyn, (1963)
- Tadhg O'Keefe, Archeology and the Pan-European Romanesque, Duckworth Publishers, (2007), ISBN 0715634348
- Corpus of Romanesque Sculpture in Britain and Ireland
- Overview of French Romanesque art
- French Romanesque art through 300 places (Italian) (French) (Spanish) (English)
- Romanesque Churches in Southern Burgundy
- Spanish and Zamora's Romanesque art, easy navigation (Spanish)
- Spanish Romanesque art (Spanish)
- Círculo Románico - Visigothic, Mozarabic and Romanesque art in Europe
- Romanesque Churches in Portugal
- The Nine Romanesque Churches of the Vall de Boi - Pyrenees (English)
- Satan in the Groin - exhibitionist carvings on medieval churches
- An illustrated article by Peter Hubert on the cusped arch
- Corrèze Illustrated history (French)
- The Encyclopedia of Romanesque Art in Spain: a work in progress (Spanish)
- Saint-Trophime Digital Media Archive (creative commons-licensed HD documentation) on the Romanesque Church of St. Trophime, using data from a World Monuments Fund/CyArk research partnership
- Cerisy-la-Forêt abbey, a masterwork of French Norman architecture
https://en.wikipedia.org/wiki/Romanesque_architecture
4.09375
Calculating the inverse of a linear function is easy: just make x the subject of the equation, and replace y with x in the resulting expression. Finding the inverse of a quadratic function is considerably trickier, not least because quadratic functions are not, unless limited by a suitable domain, one-one functions.

1. Make y or f(x) the subject of the formula if it isn't already. During your algebraic manipulation, make sure that you do not change the function in any way and that you perform the same operations to both "sides" of the equation.

2. Rearrange the function so that it is in the form y = a(x - h)^2 + k. This is not only essential for finding the inverse of the function, but also for determining whether the function even has an inverse. You can do this by two methods:
- By completing the square
  - Factor out ("take common") from the whole equation the value of a (the coefficient of x^2). Do this by writing the value of a, opening a bracket, and writing the whole equation with each term divided by the value of a. Leave the left-hand side of the equation untouched, as there has been no net change to the right-hand side.
  - Complete the square. The coefficient of x inside the bracket is (b/a). Halve it, to give (b/2a), and square it, to give (b/2a)^2. Add and subtract this from the equation; this has no net effect on the equation. If you look closely, you will see that the first three terms inside the bracket are in the form a^2 + 2ab + b^2, where a is x and b is (b/2a). Of course these two values will be numerical, rather than algebraic, for a real equation. This is a completed square.
  - Because the first three terms are now a perfect square, you can write them in the form (a - b)^2 or (a + b)^2. The sign between the two terms will be the same as the sign of the coefficient of x in the equation.
  - Take the term which is outside the perfect square out of the square bracket. This brings the equation into the form y = a(x - h)^2 + k, as intended.
- By comparing coefficients
  - Form an identity in x. On the left, put the function as it is expressed in terms of x, and on the right put the function in the form that you want it to be, in this case a(x - h)^2 + k. This will enable you to find the values of a, h, and k that are true for all values of x.
  - Open and expand the bracket on the right-hand side of the identity. We shall not be touching the left-hand side of the equation, and may omit it from our working. All working on the right-hand side is algebraic, not numerical.
  - Identify the coefficients of each power of x, then group them and place them in brackets.
  - Compare the coefficients of each power of x. The coefficient of x^2 on the right-hand side must equal that on the left-hand side; this gives the value of a. The coefficient of x on the right-hand side must also equal that on the left-hand side; this leads to an equation in a and h, which can be solved by substituting the value of a, which has already been found. The coefficient of x^0, or 1, on the left-hand side must also equal that on the right-hand side; comparing them yields an equation that gives the value of k.
  - Using the values of a, h, and k found above, write the equation in the desired form.

3. Ensure that the value of h is either on the boundary of the domain, or outside it. The value of h gives the x-coordinate of the turning point of the curve.
A turning point within the domain would mean that the function is not one-one, and hence does not have an inverse. Note that the equation is a(x - h)^2 + k, so if there is (x + 3) inside the bracket, the value of h is negative 3.

4. Make (x - h)^2 the subject of your formula. Do this by subtracting the value of k from both sides of the equation, and then dividing both sides of the equation by a. By now you will have numerical values for a, h, and k, so use those, not the symbols.

5. Take the square root of both sides of the equation. This will remove the power of two from (x - h). Do not forget to put the "+/-" sign on the other side of the equation.

6. Decide between the + and the - sign, as you cannot have both (keeping both would give a one-to-many relation, which is not a valid function). To decide, look at the domain. If the domain lies to the left of the stationary point, i.e. x < a certain value, use the - sign. If the domain lies to the right of the stationary point, i.e. x > a certain value, use the + sign. Then make x the subject of the formula.

7. Replace y with x, and x with f^-1(x), and congratulate yourself on having successfully found the inverse of a quadratic function.

Tips
- Check your inverse by calculating the value of f(x) for a certain value of x, and then substituting that value of f(x) in the inverse to see if it returns the original value of x. For example, if the function of 3 [f(3)] is 4, then substituting 4 in the inverse should return 3.
- If it is not too much trouble, you can also check the inverse by inspecting its graph. It should look like the original function reflected across the line y = x.
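To make the steps concrete, here is a short worked example in Python (the function and numbers are my own illustration, not from the article). It uses the completed-square form of f(x) = 2x^2 - 12x + 23 on the restricted domain x >= 3, applies steps 4-6 to build the inverse, and then runs the check suggested in the tips.

```python
import math

# f(x) = 2x^2 - 12x + 23 rearranges to 2(x - 3)^2 + 5, so a = 2, h = 3, k = 5.
# The turning point x = h = 3 lies on the boundary of the domain x >= 3,
# so the restricted function is one-one and an inverse exists.
def f(x):
    return 2 * x**2 - 12 * x + 23

def f_inverse(y, a=2.0, h=3.0, k=5.0):
    # Make (x - h)^2 the subject, take the square root, and keep the + sign
    # because the domain lies to the right of the turning point.
    return h + math.sqrt((y - k) / a)

x = 4.0
y = f(x)              # 7.0
print(f_inverse(y))   # 4.0 -- recovers the original x, as the check requires
```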
http://www.wikihow.com/Find-the-Inverse-of-a-Quadratic-Function
4
Gardening Level 2: Let's Get Growing

This activity book is Level 2 (B) of the 4-H Gardening Curriculum series, written by university experts. This level focuses on the following gardening skills: using transplants in a garden, developing a planting calendar to start seeds indoors, understanding plant responses, growing plants from plant parts, making a worm box, making compost, judging vegetables, growing vegetables for cash. Target Age: Grades 7-8.
- Unit 1: Let's Plan!
- Unit 2: Dig In (Planting)
- Unit 3: While You Wait
- Unit 4: Watch Out! (Garden Care)
- Unit 5: Now What? (Harvesting and Storage)
- Unit 6: Imagine That! (Careers)
http://www.4-hmall.org/Catalog/ProductDetails.aspx?ProductId=07163
4.03125
What is osteosarcoma? Cancer starts when cells in the body begin to grow out of control. Cells in nearly any part of the body can become cancer, and can spread to other areas of the body. To learn more about how cancers start and spread, see What Is Cancer? Osteosarcoma (also called osteogenic sarcoma) is a type of cancer that starts in the bones. To understand osteosarcoma, it helps to know about bones and what they do. About normal bones Many people think of bones as just being part of the skeleton, like the steel girders that support a building. But bones actually do a number of different things. - Some bones help support and protect our vital organs. Examples include the skull bones, breast bone (sternum), and ribs. These types of bones are often referred to as flat bones. - Other bones, such as those in the arms and legs, make a framework for our muscles that helps us move. These are called long bones. - Bones also make new blood cells. This is done in the soft, inner part of some bones called the bone marrow, which contains blood-forming cells. New red blood cells, white blood cells, and platelets are made in bone marrow. - Bones also provide the body with a place to store minerals such as calcium. Because bones are very hard and don’t change shape − at least once we reach adulthood − we might not think of bones as being alive, but they are. Like all other tissues of the body, bones have many kinds of living cells. Two main types of cells in our bones help them stay strong and keep their shape. - Osteoblasts help build up bones by forming the bone matrix (the connective tissue and minerals that give bone its strength). - Osteoclasts break down bone matrix to prevent too much of it from building up, and they help bones keep their proper shape. By depositing or removing minerals from the bones, osteoblasts and osteoclasts also help control the amount of these minerals in the blood. Osteosarcoma is the most common type of cancer that develops in bone. Like the osteoblasts in normal bone, the cells that form this cancer make bone matrix. But the bone matrix of an osteosarcoma is not as strong as that of normal bones. Most osteosarcomas occur in children and young adults. Teens are the most commonly affected age group, but osteosarcoma can occur at any age. In children and young adults, osteosarcoma usually develops in areas where the bone is growing quickly, such as near the ends of the long bones. Most tumors develop in the bones around the knee, either in the distal femur (the lower part of the thigh bone) or the proximal tibia (the upper part of the shinbone). The proximal humerus (the part of the upper arm bone close to the shoulder) is the next most common site. However, osteosarcoma can develop in any bone, including the bones of the pelvis (hips), shoulder, and jaw. This is especially true in older adults. Subtypes of osteosarcoma Several subtypes of osteosarcoma can be identified by how they look on x-rays and under the microscope. Some of these subtypes have a better prognosis (outlook) than others. Based on how they look under the microscope, osteosarcomas can be classified as high grade, intermediate grade, or low grade. The grade of the tumor tells doctors how likely it is that the cancer will grow and spread to other parts of the body. High-grade osteosarcomas: These are the fastest growing types of osteosarcoma. When seen under a microscope, they do not look like normal bone and have many cells in the process of dividing into new cells. 
Most osteosarcomas that occur in children and teens are high grade. There are many types of high-grade osteosarcomas (although the first 3 are the most common). - Small cell - High-grade surface (juxtacortical high grade) Other high-grade osteosarcomas include: - Pagetoid: a tumor that develops in someone with Paget disease of the bone - Extra-skeletal: a tumor that starts in a part of the body other than a bone - Post-radiation: a tumor that starts in a bone that had once received radiation therapy Intermediate-grade osteosarcomas: These uncommon tumors fall in between high-grade and low-grade osteosarcomas. (They are usually treated as if they are low-grade osteosarcomas.) - Periosteal (juxtacortical intermediate grade) Low-grade osteosarcomas: These are the slowest growing osteosarcomas. The tumors look more like normal bone and have few dividing cells when seen under a microscope. - Parosteal (juxtacortical low grade) - Intramedullary or intraosseous well differentiated (low-grade central) The grade of the tumor plays a role in determining its stage and the type of treatment used. For more on staging, see the section “How is osteosarcoma staged?” Other types of bone tumors Several other types of tumors can start in the bones. Malignant (cancerous) bone tumors Ewing tumors are the second most common bone cancer in children. They are described in our document Ewing Family of Tumors. Most other types of bone cancers are usually found in adults and are rare in children. These include: - Chondrosarcoma (cancer that develops from cartilage) - Malignant fibrous histiocytoma - Malignant giant cell tumor of bone For more information on these cancers, see our document Bone Cancer. Many types of cancer that start in other organs of the body can spread to the bones. These are sometimes referred to as metastatic bone cancers, but they are not true bone cancers. For example, prostate cancer that spreads to the bones is still prostate cancer and is treated like prostate cancer. For more information, see the document Bone Metastasis. Benign (non-cancerous) bone tumors Not all bone tumors are cancer. Benign bone tumors do not spread to other parts of the body. They are usually not life threatening and can often be cured by surgery. There are many types of benign bone tumors. - Osteomas are benign tumors formed by bone cells. - Chondromas are benign tumors formed by cartilage cells. - Osteochondromas are benign tumors with both bone and cartilage cells. Other benign bone tumors include eosinophilic granuloma of bone, non-ossifying fibroma, enchondroma, xanthoma, benign giant cell tumor of bone, and lymphangioma. The rest of this document covers only osteosarcoma. Last Medical Review: 04/18/2014 Last Revised: 01/27/2016
http://www.cancer.org/cancer/osteosarcoma/detailedguide/osteosarcoma-what-is-osteosarcoma
4.15625
Key term - Flexion: the act of bending a joint, especially a bone joint; the counteraction of extension.

Not only are a variety of movements possible with synovial joints, but in order to maintain flexibility, these joints need to be moved daily. Failure to maintain flexibility of joints makes movement more difficult and increases the probability of falls and injuries.

A synovial joint, also known as a diarthrosis, is the most common and most movable type of joint in the body of a mammal. As with most other joints, synovial joints achieve movement at the point of contact of the articulating bones. Structural and functional differences distinguish synovial joints from cartilaginous joints (synchondroses and symphyses) and fibrous joints (sutures, gomphoses, and syndesmoses). The main structural differences between synovial and fibrous joints are the existence of capsules surrounding the articulating surfaces of a synovial joint and the presence of lubricating synovial fluid within those capsules (synovial cavities).

Several movements may be performed by synovial joints. Abduction is the movement away from the midline of the body. Adduction is the movement toward the midline of the body. Extension is the straightening of limbs (increase in angle) at a joint. Flexion is bending the limbs (reduction of angle) at a joint. Rotation is a circular movement around a fixed point.

There are six types of synovial joints. Some are relatively immobile, but are more stable. Others have multiple degrees of freedom, but at the expense of greater risk of injury. The six types of joints include:
- Gliding joints, which only allow sliding movement
- Hinge joints, which allow flexion and extension in one plane
- Pivot joints, which allow rotation of one bone about another
- Condyloid joints, which allow flexion, extension, abduction, and adduction movements
- Saddle joints, which permit the same movements as condyloid joints (and condylar joints and saddle joints combine to form compound joints)
- Ball and socket joints, which allow all movements except gliding
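As a compact summary of the six joint types listed above, here is a small lookup table in Python; it is simply my own restatement of the list, not part of the original resource.

```python
# Movements allowed by each of the six types of synovial joint, as listed above.
SYNOVIAL_JOINT_MOVEMENTS = {
    "gliding":         ["sliding movement only"],
    "hinge":           ["flexion", "extension"],                # in one plane
    "pivot":           ["rotation of one bone about another"],
    "condyloid":       ["flexion", "extension", "abduction", "adduction"],
    "saddle":          ["flexion", "extension", "abduction", "adduction"],
    "ball and socket": ["all movements except gliding"],
}

for joint_type, movements in SYNOVIAL_JOINT_MOVEMENTS.items():
    print(f"{joint_type}: {', '.join(movements)}")
```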
https://www.boundless.com/physiology/textbooks/boundless-anatomy-and-physiology-textbook/joints-8/synovial-joints-92/synovial-joint-movements-520-1140/
4.15625
A concretion is a hard, compact mass of matter formed by the precipitation of mineral cement within the spaces between particles, and is found in sedimentary rock or soil. Concretions are often ovoid or spherical in shape, although irregular shapes also occur. The word 'concretion' is derived from the Latin con meaning 'together' and crescere meaning 'to grow'. Concretions form within layers of sedimentary strata that have already been deposited. They usually form early in the burial history of the sediment, before the rest of the sediment is hardened into rock. This concretionary cement often makes the concretion harder and more resistant to weathering than the host stratum.

There is an important distinction to draw between concretions and nodules. Concretions are formed from mineral precipitation around some kind of nucleus while a nodule is a replacement body.

Descriptions dating from the 18th century attest to the fact that concretions have long been regarded as geological curiosities. Because of the variety of unusual shapes, sizes and compositions, concretions have been interpreted to be dinosaur eggs, animal and plant fossils (called pseudofossils), extraterrestrial debris or human artifacts.

Detailed studies (e.g., Boles et al., 1985; Thyne and Boles, 1989; Scotchman, 1991; Mozley and Burns, 1993; McBride et al., 2003; Chan et al., 2005; Mozley and Davis, 2005) published in peer-reviewed journals have demonstrated that concretions form subsequent to burial during diagenesis. They quite often form by the precipitation of a considerable amount of cementing material around a nucleus, often organic, such as a leaf, tooth, piece of shell or fossil. For this reason, fossil collectors commonly break open concretions in their search for fossil animal and plant specimens. Some of the most unusual concretion nuclei, as documented by Al-Agha et al. (1995), are World War II military shells, bombs, and shrapnel, which are found inside siderite concretions found in an English coastal salt marsh.

Depending on the environmental conditions present at the time of their formation, concretions can be created by either concentric or pervasive growth (Mozley, 1996; Raiswell and Fisher, 2000). In concentric growth, the concretion grows as successive layers of mineral accrete to its surface. This process results in the radius of the concretion growing with time. In the case of pervasive growth, cementation of the host sediments, by infilling of its pore space by precipitated minerals, occurs simultaneously throughout the volume of the area, which in time becomes a concretion.

Concretions vary in shape, hardness and size, ranging from objects that require a magnifying lens to be clearly visible to huge bodies three meters in diameter and weighing several thousand pounds. The giant, red concretions occurring in Theodore Roosevelt National Park, in North Dakota, are almost 3 m (9.8 ft) in diameter. Spheroidal concretions, as large as 9 m (30 ft) in diameter, have been found eroding out of the Qasr El Sagha Formation within the Faiyum depression of Egypt. Concretions are usually similar in color to the rock in which they are found. Concretions occur in a wide variety of shapes, including spheres, disks, tubes, and grape-like or soap bubble-like aggregates.
They are commonly composed of a carbonate mineral such as calcite; an amorphous or microcrystalline form of silica such as chert, flint, or jasper; or an iron oxide or hydroxide such as goethite and hematite. They can also be composed of other minerals that include dolomite, ankerite, siderite, pyrite, marcasite, barite and gypsum. Although concretions often consist of a single dominant mineral, other minerals can be present depending on the environmental conditions which created them. For example, carbonate concretions, which form in response to the reduction of sulfates by bacteria, often contain minor percentages of pyrite. Other concretions, which formed as a result of microbial sulfate reduction, consist of a mixture of calcite, barite, and pyrite. Concretions are found in a variety of rocks, but are particularly common in shales, siltstones, and sandstones. They often outwardly resemble fossils or rocks that look as if they do not belong to the stratum in which they were found. Occasionally, concretions contain a fossil, either as its nucleus or as a component that was incorporated during its growth but concretions are not fossils themselves. They appear in nodular patches, concentrated along bedding planes, protruding from weathered cliffsides, randomly distributed over mudhills or perched on soft pedestals. Types of concretion Concretions vary considerably in their compositions, shapes, sizes and modes of origin. Septarian concretions or septarian nodules, are concretions containing angular cavities or cracks, called "septaria". The word comes from the Latin word septum; "partition", and refers to the cracks/separations in this kind of rock. There is an incorrect explanation that it comes from the Latin word for "seven", septem, referring to the number of cracks that commonly occur. Cracks are highly variable in shape and volume, as well as the degree of shrinkage they indicate. Although it has commonly been assumed that concretions grew incrementally from the inside outwards, the fact that radially oriented cracks taper towards the margins of septarian concretions is taken as evidence that in these cases the periphery was stiffer while the inside was softer, presumably due to a gradient in the amount of cement precipitated. The process that created the septaria, which characterize septarian concretions, remains a mystery. A number of mechanisms, e.g. the dehydration of clay-rich, gel-rich, or organic-rich cores; shrinkage of the concretion's center; expansion of gases produced by the decay of organic matter; brittle fracturing or shrinkage of the concretion interior by either earthquakes or compaction; and others, have been proposed for the formation of septaria (Pratt 2001). At this time, it is uncertain, which, if any, of these and other proposed mechanisms is responsible for the formation of septaria in septarian concretions (McBride et al. 2003). Septaria usually contain crystals precipitated from circulating solutions, usually of calcite. Siderite or pyrite coatings are also occasionally observed on the wall of the cavities present in the septaria, giving rise respectively to a panoply of bright reddish and golden colors. Some septaria may also contain small calcite stalactites and well-shaped millimetric pyrite single crystals. A spectacular example of septarian concretions, which are as much as 3 meters (9.8 feet) in diameter, are the Moeraki Boulders. 
These concretions are found eroding out of Paleocene mudstone of the Moeraki Formation exposed along the coast near Moeraki, South Island, New Zealand. They are composed of calcite-cemented mud with septarian veins of calcite and rare late-stage quartz and ferrous dolomite (Boles et al. 1985, Thyne and Boles 1989). Very similar concretions, which are as much as 3 meters (9.8 feet) in diameter and called "Koutu Boulders", litter the beach between Koutu and Kauwhare points along the south shore of the Hokianga Harbour of Hokianga, North Island, New Zealand. The much smaller septarian concretions found in the Kimmeridge Clay exposed in cliffs along the Wessex Coast of England are more typical examples of septarian concretions (Scotchman 1991). Cannonball concretions are large spherical concretions, which resemble cannonballs. These are found along the Cannonball River within Morton and Sioux Counties, North Dakota, and can reach 3 m (9.8 ft) in diameter. They were created by early cementation of sand and silt by calcite. Similar cannonball concretions, which are as much as 4 to 6 m (13 to 20 ft) in diameter, are found associated with sandstone outcrops of the Frontier Formation in northeast Utah and central Wyoming. They formed by the early cementation of sand by calcite (McBride et al. 2003). Somewhat weathered and eroded giant cannonball concretions, as large as 6 meters (20 feet) in diameter, occur in abundance at "Rock City" in Ottawa County, Kansas. Large and spherical boulders are also found along Koekohe beach near Moeraki on the east coast of the South Island of New Zealand. The Moeraki Boulders and Koutu Boulders of New Zealand are examples of septarian concretions, which are also cannonball concretions. Large spherical rocks, which are found on the shore of Lake Huron near Kettle Point, Ontario, and locally known as "kettles", are typical cannonball concretions. Cannonball concretions have also been reported from Van Mijenfjorden, Spitsbergen; near Haines Junction, Yukon Territory, Canada; Jameson Land, East Greenland; near Mecevici, Ozimici, and Zavidovici in Bosnia-Herzegovina; in Alaska in the Kenai Peninsula Captain Cook State Park on north of Cook Inlet beach. Reports of cannonball concretions have also come from Bandeng and Zhanlong hills near Gongxi Town, Hunan Province, China. Hiatus concretions are distinguished by their stratigraphic history of exhumation, exposure and reburial. They are found where submarine erosion has concentrated early diagenetic concretions as lag surfaces by washing away surrounding fine-grained sediments (Zaton 2010). Their significance for stratigraphy, sedimentology and paleontology was first noted by Voigt (1968) who referred to them as Hiatus-Konkretionen. "Hiatus" refers to the break in sedimentation that allowed this erosion and exposure. They are found throughout the fossil record but are most common during periods in which calcite sea conditions prevailed, such as the Ordovician, Jurassic and Cretaceous (Zaton 2010). Most are formed from the cemented infillings of burrow systems in siliciclastic or carbonate sediments. A distinctive feature of hiatus concretions separating them from other types is that they were often encrusted by marine organisms including bryozoans, echinoderms and tube worms in the Paleozoic (e.g., Wilson 1985) and bryozoans, oysters and tube worms in the Mesozoic and Cenozoic (e.g., Taylor and Wilson 2001). Hiatus concretions are also often significantly bored by worms and bivalves (Taylor and Wilson 2001). 
Elongate concretions form parallel to sedimentary strata and have been studied extensively due to the inferred influence of phreatic (saturated) zone groundwater flow direction on the orientation of the axis of elongation (e.g., Johnson, 1989; McBride et al., 1994; Mozley and Goodwin, 1995; Mozley and Davis, 2005). In addition to providing information about the orientation of past fluid flow in the host rock, elongate concretions can provide insight into local permeability trends (i.e., permeability correlation structure; Mozley and Davis, 1996), variation in groundwater velocity (Davis, 1999), and the types of geological features that influence flow. Elongate concretions are well known in the Kimmeridge Clay formation of northwest Europe. In outcrops, where they have acquired the name "doggers", they are typically only a few metres across, but in the subsurface they can be seen to penetrate up to tens of metres of along-hole dimension. Unlike limestone beds, however, it is impossible to consistently correlate them between even closely spaced wells.

Moqui Marbles, also called Moqui balls and "Moki marbles", are iron oxide concretions which can be found eroding in great abundance out of outcrops of the Navajo Sandstone within south-central and southeastern Utah. These concretions range in shape from spheres to discs, buttons, spiked balls, cylindrical forms, and other odd shapes. They range from pea-size to baseball-size. They were created by the precipitation of iron, which was dissolved in groundwater. They are further described by Chan and Parry (2002), Chan et al. (2005), and Loope et al. (2010, 2011).

Kansas pop rocks

Kansas pop rocks are concretions of either iron sulfide, i.e. pyrite and marcasite, or in some cases jarosite, which are found in outcrops of the Smoky Hill Chalk Member of the Niobrara Formation within Gove County, Kansas. They are typically associated with thin layers of altered volcanic ash, called bentonite, that occur within the chalk comprising the Smoky Hill Chalk Member. A few of these concretions enclose, at least in part, large flattened valves of inoceramid bivalves. These concretions range in size from a few millimeters to as much as 0.7 m (2.3 ft) in length and 12 cm (0.39 ft) in thickness. Most of these concretions are oblate spheroids in shape. Other "pop rocks" are small polycuboidal pyrite concretions, which are as much as 7 cm (0.23 ft) in diameter (Hattin 1982). These concretions are called "pop rocks" because they explode if thrown in a fire. Also, when they are either cut or hammered, they produce sparks and a burning sulfur smell.

Contrary to what has been published on the Internet, none of the iron sulfide concretions found in the Smoky Hill Chalk Member were created by either the replacement of fossils or by metamorphic processes. In fact, metamorphic rocks are completely absent from the Smoky Hill Chalk Member (Hattin 1982). Instead, all of these iron sulfide concretions were created by the precipitation of iron sulfides within anoxic marine calcareous ooze after it had accumulated and before it had lithified into chalk. Iron sulfide concretions, such as the Kansas pop rocks, consisting of either pyrite or marcasite, are nonmagnetic (Hobbs and Hafner 1999). On the other hand, iron sulfide concretions which are either composed of or contain pyrrhotite or smythite will be magnetic to varying degrees (Hoffmann, 1993).
Prolonged heating of either a pyrite or marcasite concretion will convert portions of either mineral into pyrrhotite causing the concretion to become slightly magnetic. Calcium carbonate disc concretions These so-called fairy stones consist of single or multiple discs, usually 6–10 cm in diameter and often with concentric grooves on their surfaces. They form in Quaternary clay as calcium carbonate migrates to some small fossil or pebble. Fairy stones are particularly common in the Harricana River valley in the Abitibi-Témiscamingue administrative region of Quebec, and in Östergötland county, Sweden. - Bowling Ball Beach - Calcrete, CaCO3 concretions in arid and semi-arid soils - Caliche (mineral), synonym of calcrete - Dinocochlea in the Natural History Museum, London - Clay dogs - Gypcrust, CaSO4 concretions in arid and semi-arid soils - Klerksdorp sphere - Martian spherules - Moeraki Boulders (New Zealand) - Mushroom Rock State Park, Kansas - Nodule (geology), a replacement body, not to be confused with a concretion - Rock City, Kansas - Speleothems, CaCO3 formations in caves - Glossary of terms in soil science (PDF). Ottawa: Agriculture Canada. 1976. p. 13. ISBN 0662015339. - "septarian". dictionary.reference.com. Retrieved March 20, 2014. - "SEPTARIAN NODULES". Archived from the original on 5 September 2013. - Dann, C., and Peat, N. (1989) Dunedin, North and South Otago. Wellington: GP Books. ISBN 0-477-01438-0 - "The Epoch Times - Mysterious Huge Stone Eggs Discovered in Hunan Province". - Al-Agha, M.R., S.D. Burley, C.D. Curtis, and J. Esson, 1995, Complex cementation textures and authigenic mineral assemblages in Recent concretions from the Lincolnshire Wash (east coast, UK) driven by Fe(0) Fe(II) oxidation: Journal of the Geological Society, London, v. 152, pp. 157–171. - Boles, J.R., C.A. Landis, and P. Dale, 1985, The Moeraki Boulders; anatomy of some septarian concretions:, Journal of Sedimentary Petrology. v. 55, n. 3, pp. 398–406. - Chan, M.A. and W.T. Parry, 2002, 'Mysteries of Sandstone Colors and Concretions in Colorado Plateau Canyon Country PDF version, 468 KB : Utah Geological Survey Public Information Series. n. 77, pp. 1–19. - Chan, M.A., B.B. Beitler, W.T. Parry, J. Ormo, and G. Komatsu, 2005. Red Rock and Red Planet Diagenesis: Comparison of Earth and Mars Concretions PDF version, 3.4 MB : GSA Today, v. 15, n. 8, pp. 4–10. - Davis, J.M., 1999, Oriented carbonate concretions in a paleoaquifer: Insights into geologic controls on fluid flow: Water Resources Research, v. 35, p. 1705-1712. - Hattin, D.E., 1982, Stratigraphy and depositional environment of the Smoky Hill Chalk Member, Niobrara Chalk (Upper Cretaceous) of the type area, western Kansas: Kansas Geological Survey Bulletin 225:1-108. - Hobbs, D., and J. Hafnaer, 1999, Magnetism and magneto-structural effects in transition-metal sulphides: Journal of Physics: Condensed Matter, v. 11, pp. 8197–8222. - Hoffmann, V., H. Stanjek, and E. Murad, 1993, Mineralogical, magnetic and mössbauer data of symthite (Fe9S11) : Studia Geophysica et Geodaetica, v. 37, pp. 366–381. - Johnson, M.R., 1989, Paleogeographic significance of oriented calcareous concretions in the Triassic Katberg Formation, South Africa: Journal of Sedimentary Petrology, v. 59, p. 1008-1010. - Loope D.B., Kettler R.M., Weber K.A., 2011, Morphologic Clues to the origin of Iron Oxide-Cemented Sphereoids, Boxworks, and Pipelike Concretions, Navajo Sandstone of South-Central Utah, U.S.A, The Journal of Geology, Vol. 119, No. 5 (September 2011), pp. 
505–520 - Loope D.B., Kettler R.M., Weber K.A., 2011, Follow the water: Connecting a CO2 reservoir and bleached sandstone to iron-rich concretions in the Navajo Sandstone of south-central Utah, USA, GEOLOGY FORUM, November 2011, Geological Society of America doi:10.1130/G32550Y.1 - McBride, E.F., M.D. Picard, and R.L. Folk, 1994, Oriented concretions, Ionian Coast, Italy: evidence of groundwater flow direction: Journal of Sedimentary Research, v. 64, p. 535-540. - McBride, E.F., M.D. Picard, and K.L. Milliken, 2003, Calcite-Cemented Concretions in Cretaceous Sandstone, Wyoming and Utah, U.S.A.: Journal of Sedimentary Research. v. 73, n. 3, p. 462-483. - Mozley, P.S., 1996, The internal structure of carbonate concretions: A critical evaluation of the concentric model of concretion growth: Sedimentary Geology: v. 103, p. 85-91. - Mozley, P.S., and Goodwin, L., 1995, Patterns of cementation along a Cenozoic normal fault: A record of paleoflow orientations: Geology: v. 23, p 539-542. - Mozley, P.S., and Burns, S.J., 1993, Oxygen and carbon isotopic composition of marine carbonate concretions: an overview: Journal of Sedimentary Petrology, v. 63, p. 73-83. - Mozley, P.S., and Davis, J.M., 2005, Internal structure and mode of growth of elongate calcite concretions: Evidence for small-scale microbially induced, chemical heterogeneity in groundwater: Geological Society of America Bulletin, v. 117, 1400-1412. - Pratt, B.R., 2001, "Septarian concretions: internal cracking caused by synsedimentary earthquakes": Sedimentology, v. 48, p. 189-213. - Raiswell, R., and Q.J. Fisher, 2000, Mudrock-hosted carbonate concretions: a review of growth mechanisms and their influence on chemical and isotopic composition: Journal of Geological Society of London. v. 157, p. 239-251 - Scotchman, I.C., 1991, The geochemistry of concretions from the Kimmeridge Clay Formation of southern and eastern England: Sedimentology. v. 38, pp. 79-106. - Thyne, G.D., and J.R. Boles, 1989, Isotopic evidence for origin of the Moeraki septarian concretions, New Zealand: Journal of Sedimentary Petrology. v. 59, n. 2, pp. 272-279. - Voigt, E., 1968, Uber-Hiatus-Konkretion (dargestellt an Beispielen aus dem Lias): Geologische Rundschau. v. 58, pp. 281–296. - Wilson, M.A., 1985, Disturbance and ecologic succession in an Upper Ordovician cobble-dwelling hardground fauna: Science. v. 228, pp. 575-577. - Wilson, M.A., and Taylor, P.D., 2001, Palaeoecology of hard substrate faunas from the Cretaceous Qahlah Formation of the Oman Mountains: Palaeontology. v. 44, pp. 21-41. - Zaton, M., 2010, Hiatus concretions: Geology Today. v. 26, pp. 186–189. |Wikimedia Commons has media related to Concretion.| - Dietrich, R.V., 2002, Carbonate Concretions--A Bibliography, The Wayback Machine. and PDF file of Carbonate Concretions--A Bibliography, CMU Online Digital Object Repository, Central Michigan University, Mount Pleasant, Michigan. - Biek, B., 2002, Concretions and Nodules in North Dakota North Dakota Geological Survey, Bismarck, North Dakota. - Epoch Times Staff, 2007, Mysterious Huge Stone Eggs Discovered in Hunan Province Epoch Times International. Photographs of large cannonball concretions recently found in Hunan Province, China. - Everhart, M., 2004, A Field Guide to Fossils of the Smoky Hill ChalkPart 5: Coprolites, Pearls, Fossilized Wood and other Remains Part of the Oceans of Kansas web site. - Hansen, M.C., 1994, Ohio Shale Concretions PDF version, 270 KB Ohio Division of Geological Survey GeoFacts n. 4, pp. 1–2. - Hanson, W.D., and J.M. 
Howard, 2005, Spherical Boulders in North-Central Arkansas PDF version, 2.8 MB Arkansas Geological Commission Miscellaneous Publication n. 22, pp. 1–23. - Heinrich, P.V., 2007, The Giant Concretions of Rock City Kansas PDF version, 836 KB BackBender's Gazette. vol. 38, no. 8, pp. 6–12. - Hokianga Tourism Association, nd, Koutu Boulders ANY ONE FOR A GAME OF BOWLS? and Koutu Boulders, Hokianga Harbour, Northland, New Zealand High-quality pictures of cannonball concretions. - Irna, 2006, All that nature can never do, part IV : stone spheres - Irna, 2007a, Stone balls : in France too! - Irna, 2007b, Stone balls in Slovakia, Czech Republic and Poland - Katz, B., 1998, Concretions Digital West Media, Inc. - Kuban, Glen J., 2006-2008. Nevada Shoe Prints? - McCollum, A., nd, Sand Concretions from Imperial Valley, a collection of articles maintained by an American artist. - Mozley, P.S., Concretions, bombs, and groundwater, on-line version of an overview paper originally published by the New Mexico Bureau of Geology and Mineral Resources. - United States Geological Survey, nd, Cannonball concretion - University of Utah, 2004, Earth Has 'Blueberries' Like Mars 'Moqui Marbles' Formed in Groundwater in Utah's National Parks press release about iron oxide and Martian concretions
https://en.wikipedia.org/wiki/Concretion
4.25
Temperature is a measure of the thermal energy of a body, and it determines the direction in which heat flows. When you put two things of different temperature in contact with each other, they will eventually reach the same temperature (thermal equilibrium). What happens is that heat flows from the hotter object to the cooler object. For example, the hot water in your first beaker will eventually cool to room temperature as heat flows from the hot water and beaker to the surroundings. While it is cooling, you may notice that the beaker is also hot (it is probably in thermal equilibrium with the hot water).

The cooling rate depends on the heat capacity of the substances. The lower the heat capacity, the quicker heat is gained or lost (depending on the situation). Now, if we want to know the difference in the rate of cooling of a single beaker and a lot of beakers packed closely together, we simply have to note their heat capacities. Note that heat capacity depends on the mass of the substance. Assuming that the entire system (beaker + water) is a single entity with a certain heat capacity, the single beaker will cool faster than the group of beakers. [Of course this will change and will be different if the beakers are far apart from each other.]
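To put rough numbers on this, here is a minimal sketch using Newton's law of cooling with a lumped heat capacity; the heat-transfer values, masses and temperatures are made-up placeholders of my own, not figures from the answer above. A cluster of beakers treated as one body has a larger heat capacity (more mass) but proportionally less exposed surface, so it cools more slowly than a single beaker.

```python
import math

def temperature(t, T0=80.0, T_room=22.0, hA=0.5, m=0.25, c=4186.0):
    """Temperature (deg C) of a lumped body after t seconds, starting at T0.

    Newton's law of cooling: dT/dt = -(hA / (m * c)) * (T - T_room), giving
    T(t) = T_room + (T0 - T_room) * exp(-hA * t / (m * c)).
    hA lumps together the heat-transfer coefficient and exposed area;
    m * c is the heat capacity. All numbers here are illustrative guesses.
    """
    return T_room + (T0 - T_room) * math.exp(-hA * t / (m * c))

# One beaker of hot water versus ten beakers packed tightly together
# (ten times the mass, but only a few times the exposed surface area).
print(temperature(600, m=0.25, hA=0.5))   # single beaker: cools faster
print(temperature(600, m=2.5, hA=2.0))    # tight cluster: stays warmer longer
```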
http://www.enotes.com/homework-help/can-you-please-explain-difference-temperature-440839
4.09375
As hurricane Sandy made its way to the Eastern coast of the United States in October 2012, meteorologists called the storm unprecedented in terms of its potential for damage and fatalities. Few events on Earth rival the sheer power of a hurricane. Also known as tropical cyclones and typhoons, these fierce storms can churn the seas into a violent topography of 50-foot (15-meter) peaks and valleys, redefine coastlines and reduce whole cities to watery ruin. Some researchers even theorize that the dinosaurs were wiped out by prehistoric hypercanes, a kind of super-hurricane stirred to life by the heat of an asteroid strike [source: National Geographic]. Every year, the world experiences hurricane season. During this period, hundreds of storm systems spiral out from the tropical regions surrounding the equator, and between 40 and 50 of these storms intensify to hurricane levels. In the Northern Hemisphere, the season runs from June 1 to Nov. 30, while the Southern Hemisphere generally experiences hurricane activity from January to March. So 75 percent of the year, it's safe to say that someone somewhere is probably worrying about an impending hurricane. A hurricane builds energy as it moves across the ocean, sucking up warm, moist tropical air from the surface and dispensing cooler air aloft. Think of this as the storm breathing in and out. The hurricane escalates until this "breathing" is disrupted, like when the storm makes landfall. At this point, the storm quickly loses its momentum and power, but not without unleashing wind speeds as high as 185 mph (300 kph) on coastal areas. In this article, we'll explore the lifecycle and anatomy of a hurricane, as well as the methods we use to classify and track these ultimate storm systems as they hurtle across the globe. Defining a Hurricane To understand how a hurricane works, you have to understand the basic principles of atmospheric pressure. The gases that make up Earth's atmosphere are subject to the planet's gravity. In fact, the atmosphere weighs in at a combined 5.5 quadrillion tons (4.99 quadrillion metric tons). The gas molecules at the bottom, or those closest to the Earth's surface where we all live, are compressed by the weight of the air above them. The air closest to us is also the warmest, as the atmosphere is mostly heated by the land and the sea, not by the sun. To understand this principle, think of a person frying an egg on the sidewalk on a hot, sunny day. The heat absorbed by the pavement actually fries the egg, not the heat coming down from the sun. When air heats up, its molecules move farther apart, making it less dense. This air then rises to higher altitudes where air molecules are less compressed by gravity. When warm, low-pressure air rises, cool, high-pressure air seizes the opportunity to move in underneath it. This movement is called a pressure gradient force. These are some of the basic forces at work when a low-pressure center forms in the atmosphere -- a center that may turn into what people in the North Atlantic, North Pacific and Caribbean regions call a hurricane. What else is happening? Well, as we know, warm, moist air from the ocean's surface begins to rise rapidly. As it rises, its water vapor condenses to form storm clouds and droplets of rain. The condensation releases heat called latent heat of condensation. This latent heat warms the cool air, causing it to rise. This rising air is replaced by more warm, humid air from the ocean below. 
And the cycle continues, drawing more warm, moist air into the developing storm and moving heat from the surface to the atmosphere. This exchange of heat creates a pattern of wind that circulates around a center, like water going down a drain.

But what about those signature ferocious winds? Converging winds at the surface are colliding and pushing warm, moist air upward. This rising air reinforces the air that's already ascending from the surface, so the circulation and wind speeds of the storm increase. In the meantime, strong winds blowing at the same speed at higher altitudes (up to 30,000 feet or 9,000 meters) help to remove the rising hot air from the storm's center, maintaining a continual movement of warm air from the surface and keeping the storm organized. If the high-altitude winds don't blow at the same speed at all levels -- if wind shears are present -- the storm becomes disorganized and weakens. Even higher in the atmosphere (above 30,000 feet or 9,000 meters), high-pressure air over the storm's center also removes heat from the rising air, further driving the air cycle and the hurricane's growth. As high-pressure air is sucked into the low-pressure center of the storm, wind speeds increase. Then you have a hurricane to contend with.

How a Hurricane Forms

[Interactive animation omitted. Source: NASA Observatorium]

You never hear about hurricanes hitting Alaska. That's because hurricanes develop in warm, tropical regions where the water is at least 80 degrees Fahrenheit (27 degrees Celsius). The storms also require moist air and converging equatorial winds. Most Atlantic hurricanes begin off the west coast of Africa, starting as thunderstorms that move out over the warm, tropical ocean waters.

A hurricane's low-pressure center of relative calm is called the eye. The area surrounding the eye is called the eye wall, where the storm's most violent winds occur. The bands of thunderstorms that circulate outward from the eye are called rain bands. These storms play a key role in the evaporation/condensation cycle that feeds the hurricane.

The rotation of a hurricane is a product of the Coriolis force, a natural phenomenon that causes fluids and free-moving objects to veer to the right of their destination in the Northern Hemisphere and to the left in the Southern Hemisphere. Imagine flying a small plane directly south. While you're moving southward, the planet is rotating. If you plotted a flight from the North Pole to the equator on a map, the path would appear to curve to the right. So in the Northern Hemisphere, winds deflect to the right, and in the Southern Hemisphere they deflect to the left. This wind deflection gets storms spinning. As a result, hurricanes rotate counterclockwise in the Northern Hemisphere and clockwise in the Southern Hemisphere. The force also affects the actual path of the hurricane, bending it to the right (clockwise) in the Northern Hemisphere and to the left (counterclockwise) south of the equator. If you'd rather avoid hurricanes altogether, move to within five degrees of the equator; the Coriolis force is too weak there to help form them.

Hurricanes often begin their lives as clusters of clouds and thunderstorms called tropical disturbances. These low-pressure areas feature weak pressure gradients and little or no rotation. Most of these disturbances die out, but a few persevere down the path to hurricane status. In these cases, the thunderstorms in the disturbance release latent heat, which warms areas in the disturbance.
This causes the air density inside the disturbance to lower, dropping the surface pressure. Wind speeds increase as cooler air rushes underneath the rising warm air. As this wind is subject to the Coriolis force, the disturbance begins to rotate. The incoming winds bring in more moisture, which condenses to form more cloud activity and releases latent heat in the process. On the next page, we'll explore the brief, violent life of a hurricane. Lifecycle of a Hurricane Given the destruction the storm unleashes, it's easy to think of a hurricane as a kind of monster. It may not be a living organism, but it does require sustenance in the form of warm, moist air. And if a tropical disturbance continues to find enough of this "food" and to encounter optimal wind and pressure conditions, it will just keep growing. It can take anywhere from hours to days for a tropical disturbance to develop into a hurricane. But if the cycle of cyclonic activity continues and wind speeds increase, the tropical disturbance advances through three stages: - Tropical depression: wind speeds of less than 38 mph - Tropical storm: wind speeds of 39 to 73 mph - Hurricane: wind speeds greater than 74 mph Between 80 and 100 tropical storms develop each year around the world. Many of them die out before they can grow too strong, but around half of them eventually achieve hurricane status. Hurricanes vary widely in physical size. Some storms are compact, with only a few bands of wind and rain trailing behind them. Other storms are looser -- the bands of wind and rain spread out over hundreds or thousands of miles. Hurricane Floyd, which hit the eastern United States in September 1999, was felt from the Caribbean islands to New England. Once a hurricane has formed and intensified, the only remaining path for the atmospheric juggernaut is dissipation. Eventually, the storm will encounter conditions that deny it the warm, moist air it requires. When a hurricane moves onto cooler waters at a higher latitude, gradient pressure decreases, winds slow, and the entire storm is tamed, from a tropical cyclone to a weaker extratropical cyclone that peters out in days. That important supply of warm, moist air also vanishes when the hurricane makes landfall. Condensation and the release of latent heat diminishes, and the friction of an uneven landscape decreases wind speeds. This causes winds to move more directly into the eye of the storm, eliminating the large pressure difference that fuels the storm's awesome power. Hurricanes can unleash incredible damage when they hit. With enough advance warning though, cities and coastal areas can give residents the time they need to fortify the area and even evacuate. To better classify each hurricane and prepare those affected for the intensity of the storm, meteorologists rely on rating systems. Australian meteorologists use a slightly different scale to classify hurricanes. While the Australian scale of cyclone intensity also ranks storms by wind speed and damage on a scale of 1 to 5, it covers both hurricanes and tropical storms. On the next page, we'll look at the tremendous damage hurricanes can inflict when they collide with coastal areas. Over the course of millennia, hurricanes have cemented their reputation as destroyers. Many people even frame them as the embodiment of nature's power or acts of divine wrath. The word "hurricane" itself actually derives from "Hurakan," a destructive Mayan god. 
No matter how you choose to sum up or personify these powerful acts of nature, the damage they inflict stems from several different aspects of the storm. Hurricanes deliver massive downpours of rain. A particularly large storm can dump dozens of inches of rain in just a day or two, much of it inland. That amount of rain can create flooding, potentially devastating large areas in the path of the hurricane's fierce center. In addition, high sustained winds within the storm can cause widespread structural damage to both man-made and natural structures. These winds can roll over vehicles, collapse walls and blow over trees.

The prevailing winds of a hurricane push a wall of water, called a storm surge, in front of the storm. If the storm surge happens to coincide with high tide, the beach erosion and inland flooding it causes are even worse. The hurricane itself is often just the beginning. The storm's winds often spawn tornadoes, which are smaller, more intense cyclonic storms that cause additional damage. You can read more about them in How Tornadoes Work.

The extent of hurricane damage doesn't just depend on the strength of the storm, but also on the way it makes contact with the land. In many cases, the storm merely grazes the coastline, sparing the shores its full power. Hurricane damage also greatly depends on whether the left or right side of a hurricane strikes a given area. The right side of a hurricane packs more punch because the wind speed and the hurricane's speed of motion complement one another there. On the left side, the hurricane's speed of motion subtracts from the wind speed.

This combination of winds, rain and flooding can level a coastal town and cause significant damage to cities far from the coast. In 1996, Hurricane Fran swept 150 miles (241 km) inland to hit Raleigh, N.C. Tens of thousands of homes were damaged or destroyed, millions of trees fell, power was out for weeks in some areas and the total damage was measured in the billions of dollars.

Tracking a Hurricane

To monitor and track the development and movement of a hurricane, meteorologists rely on remote sensing by satellites, as well as data gathered by specially equipped aircraft. On the ground, Regional Specialized Meteorological Centers, a network of global centers designated by the World Meteorological Organization, are charged with tracking and notifying the public about extreme weather.

Weather satellites use different sensors to gather different types of information about hurricanes. They track visible clouds and air circulation patterns, while radar measures precipitation and wind speeds. Infrared sensors also detect vital temperature differences within the storm, as well as cloud heights.

The Hurricane Hunters are members of the 53rd Weather Reconnaissance Squadron/403rd Wing, based at Keesler Air Force Base in Biloxi, Miss. Since 1965, the Hurricane Hunters team has used the C-130 Hercules, a very sturdy turboprop plane, to fly into tropical storms and hurricanes. The only difference between this plane and the cargo version is the specialized, highly sensitive weather equipment installed on the WC-130. The team can cover up to five storm missions per day, anywhere from the mid-Atlantic to Hawaii. The Hurricane Hunters gather information about wind speeds, rainfall and barometric pressures within the storm. They then relay this information back to the National Hurricane Center in Miami, Fla. If you're curious about these foolhardy pilots, read Why would someone fly an airplane into a hurricane?
Meteorologists take all the storm data they receive and use it to create computer forecast models. Based on a great deal of current and past statistical data, these virtual storms allow scientists to forecast a hurricane's path and changes in intensity well in advance of landfall. With this data, governments and news agencies ideally can warn residents of coastal areas and greatly reduce the loss of life during a hurricane. Long-term forecasting now allows meteorologists to predict how many hurricanes will take place in an upcoming season and to study trends and patterns in global climate. While personifying a massive, destructive force certainly makes for a jazzier headline, the practice of naming hurricanes originated with meteorologists, not media outlets. Often more than one tropical storm is active at the same time, so what better way to tell them apart than by naming them? For several hundred years, residents of the West Indies often named hurricanes after the Catholic saint's day on which the storm made landfall. If a storm arrived on the anniversary of a previous storm, a number was assigned. For example, Hurricane San Felipe struck Puerto Rico on Sept. 13, 1876. Another storm struck Puerto Rico on the same day in 1928, so this storm was named Hurricane San Felipe the Second. During World War II, weather officials only gave hurricanes masculine names. These names closely followed radio code names for letters of the alphabet. This system, like the West Indian saints system, drew from a limited naming pool. In the early 1950s, weather services began naming storms alphabetically and with only feminine names. By the late 1970s, this practice was replaced with the equal opportunity system of alternating masculine and feminine names. The World Meteorological Organization (WMO) continues this practice to this day. The first hurricane of the season is given a name starting with the letter A, the second with the letter B and so on. As the storms affect varying portions of the globe, the naming lists draw from different cultures and nationalities. Hurricanes in the Pacific Ocean are assigned a different set of names than Atlantic storms. For example, the first hurricane of the 2001 hurricane season was a Pacific Ocean storm near Acapulco, Mexico, named Adolf. The first Atlantic storm of the 2001 season was named Allison. A list of names through 2011 is available from the National Hurricane Center. If a hurricane inflicts significant damage, a country affected by the storm can request that the name of the hurricane be "retired" by the WMO. A retired name can't be reissued to a tropical storm for at least 10 years. This helps to avoid public confusion and to simplify both historical and legal record keeping. Our modern understanding of hurricanes depends largely on a mere century's worth of scientific study and record keeping, but the storms have been dictating the course of human history for millennia. After all, they're a part of an atmospheric system that predates the human race by billions of years. While scientists are largely left to speculate about the strength of Mesozoic Era storms, geologists have discovered evidence of Iron Age hurricanes in layers of ground sediment. When storm surges wash over land and into lakes, they leave fans of sand behind. Scientists can carbon date organic materials above and below the sand to determine an approximate storm date. 
A team from Louisiana State University studied thousands of years' worth of lake bed evidence and discovered that, over the past 3,400 years, a dozen Category 4 or higher hurricanes hit the area -- yet most of them occurred 1,000 years or more ago [source: Young]. Findings such as these allow scientists to better study long-term weather patterns and possibly make better sense of current climate trends.

As far as human records go, the ancient Mayans of Central America made some of the earliest mentions of hurricanes in their hieroglyphics. The centuries that follow are littered with accounts of hurricanes affecting the outcomes of wars, colonization efforts and an untold number of personal lives. Just to name a few, hurricane activity thwarted the following sea ventures through the destruction and scattering of ocean fleets:

- The 1274 Mongol invasion of Japan
- A 1559 attempt by the Spanish to recapture Florida
- The French defense of a Floridian fort, subsequently lost to the Spanish in 1565
- The Spanish Armada's attack on England in 1588
- A 1640 Dutch attack on Cuba
- British dominance over the French in the Caribbean Islands in 1780

Today, modern meteorology prevents most hurricanes from arriving unannounced, greatly decreasing the massive hurricane fatality rates of previous centuries. But even with advance warning, governments and the residents of coastal areas still have to properly prepare for the coming storms.

Meanwhile, some experts look to the future with concern. Some point to periods of intense hurricane activity in Earth's past and worry that such trends may return. Others argue that global warming brought on by the increased production of greenhouse gases will lead to larger hurricane zones and more powerful storms. After all, hurricanes thrive on warm, moist waters, and a warmer Earth could provide more sustenance for tropical storms. Explore the links on the next page to learn more about hurricanes and the Earth's weather, including a story about those crazy pilots who fly their planes into hurricanes.

More Great Links

- "Atmosphere." Britannica Student Encyclopædia. 2008. (Aug. 5, 2008) http://student.britannica.com/comptons/article-196868/atmosphere
- Drye, Willie. "Hurricanes of History -- From Dinosaur Times to Today." National Geographic News. Jan. 28, 2005. (Aug. 19, 2008) http://news.nationalgeographic.com/news/2005/01/0128_050128_tv_hurricane.html
- "Evolution of the atmosphere." Britannica Online Encyclopædia. 2008. (Aug. 8, 2008) http://www.britannica.com/EBchecked/topic/1424734/evolution-of-the-atmosphere
- "The History of Hurricanes." Federal Emergency Management Agency. (Aug. 21, 2008) http://www.fema.gov/kids/hurr_hist.htm
- "Hurricane Timeline: 1495 to 1800." South Florida Sun-Sentinel. 2008. (Aug. 21, 2008) http://www.sun-sentinel.com/news/weather/hurricane/sfl-hc-history-1495to1800,0,3354030.htmlstory
- "Jet stream." Britannica Online Encyclopædia. 2008. (Aug. 8, 2008) http://www.britannica.com/EBchecked/topic/303269/jet-stream
- "Lightning." Britannica Online Encyclopædia. 2008. (Aug. 8, 2008) http://www.britannica.com/EBchecked/topic/340767/lightning#default
- Reynolds, Ross. "Cambridge Guide to Weather." Cambridge University Press. 2000.
- Tarbuck, Edward and Frederick Lutgens. "Earth Science: Eleventh Edition." Pearson Prentice Hall. 2006.
- Toothman, Jessika. "How Clouds Work." HowStuffWorks.com. May 5, 2008. (Aug. 8, 2008) http://science.howstuffworks.com/cloud.htm
- "Tropical Cyclone." Britannica Online Encyclopædia. 2008. (Aug. 20, 2008) http://www.britannica.com/EBchecked/topic/606551/tropical-cyclone
- Vogt, Gregory L. "The Atmosphere: Planetary Heat Engine." Twenty-First Century Books. 2007.
- Wilson, Tracy V. "How the Earth Works." HowStuffWorks.com. April 21, 2006. (Aug. 8, 2008) http://science.howstuffworks.com/Earth.htm
- Young, Emma. "Raiders of the lost storms." New Scientist. June 10, 2006. (Aug. 21, 2008) http://environment.newscientist.com/channel/earth/hurricane-season/mg19025551.300-raiders-of-the-lost-storms.html
http://science.howstuffworks.com/nature/natural-disasters/hurricane.htm/printable
4.40625
There are many different types of particles, with different particle sizes and properties. Atoms and molecules are called microscopic particles. Subatomic particles are particles that are smaller than atoms. The proton, the neutron, and the electron are subatomic particles. These are the particles that make up atoms. The proton has a positive charge (a + charge). The neutron has no charge (it is neutral). The electron has a negative charge (a - charge), and it is the smallest of these three particles. In atoms, there is a small nucleus in the center, which is where the protons and neutrons are, and electrons orbit the nucleus. Protons and neutrons are made up of quarks. Quarks are subatomic particles, but they are also elementary particles, because as far as we know they are not made up of even smaller particles. There are six different types of quarks. These are the up quark, the down quark, the strange quark, the charm quark, the bottom quark, and the top quark. A neutron is made of two down quarks and one up quark. The proton is made up of two up quarks and one down quark.
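As a small illustration of those last two sentences, the charges work out exactly. The fractional charges of the quarks (+2/3 of the elementary charge for the up quark, -1/3 for the down quark) are not stated in the passage above, but they are the standard values, and adding them up reproduces the proton's +1 charge and the neutron's 0 charge. A short Python check:

from fractions import Fraction

# Quark charges in units of the elementary charge e (standard values,
# not given in the passage itself).
CHARGE = {"up": Fraction(2, 3), "down": Fraction(-1, 3)}

def total_charge(quarks):
    return sum(CHARGE[q] for q in quarks)

print(total_charge(["up", "up", "down"]))    # proton: 2 up + 1 down -> 1
print(total_charge(["down", "down", "up"]))  # neutron: 2 down + 1 up -> 0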
https://simple.wikipedia.org/wiki/Particles
4
"If we look we'll find 'em... the microbes are there. They're these little packages of secrets that are waiting to be opened."
- Anna-Louise Reysenbach

Introduction

Microbes flourish. Inside your gut, in the mucky soil of a marsh, in Antarctic ice, in the hot springs of Yellowstone, in habitats seemingly incompatible with life, microbes flourish. They were present on Earth 3.5 to 4 billion years ago, and they've been evolving and expanding into new environments ever since. Replicating quickly, exchanging genetic material with each other and with other organisms, bacteria and archaea have become ubiquitous. Not only are they everywhere, but these tiny organisms also manipulate the environments in which they live. Their presence has driven the development of new ecosystems - some of which allowed for the evolution of more complex organisms. Without microbes, the recycling of essential nutrients on Earth would halt. Microbes communicate; some generate the signals for the formation of metabolically diverse communities. Some use sophisticated signaling to establish complex relationships with higher organisms. In this unit we will examine examples of the broad diversity of microorganisms and consider their roles in various ecosystems, both natural and man-made. We will also discuss some of the practical applications that derive from the wealth of metabolic diversity that microorganisms possess. Let's start at the beginning ... three or four billion years ago.
http://www.learner.org/courses/biology/textbook/microb/microb_1.html
4.09375
After reading this tutorial you might want to check out some of our other Mathematics Quizzes as well.

In the first two tutorials, we have considered a random experiment that has only one characteristic and hence its outcome is a random variable X that assumes a single value. However, in the following tutorial, we will deal with random experiments having 2 (or more) characteristics and hence random variables X, Y (or more). Such random variables are called jointly distributed random variables (rvs). For example:

x1: height of a person
x2: weight of a person
x3: blood pressure of a person
x4: sugar count of a person

Hence, x1, x2, x3, x4 are jointly distributed. However, here we will consider a two-dimensional random variable (X, Y). We will study the following cases, along with some other characteristics of jointly distributed random variables and the transformation of random vectors.

(For Discrete Variables X and Y): Joint Probability, Probability mass function, Marginal Probability mass function, Conditional Probability Mass Function, Independence of events.

(For Continuous Variables X and Y): Probability density function, Marginal Probability Distribution Function, Conditional Probability Distribution Function, Independence of events.

Properties common to both cases: Properties of the CDF, Product Moments, Central moments, Non-central moments.

Q: Find the relation between the Geometric and Pascal distributions, and between the Exponential and Gamma distributions.

Q: Suppose a shopkeeper has 10 pens of a brand, out of which 5 are good (G), 2 have defective inks (DI) and 3 have defective caps (DC). If 2 pens are selected at random, find the probability that
i. Not more than one is DI and not more than one is DC.
ii. P(DI < 2)

Q: The joint probability mass function of (X, Y) is given by p(x, y) = k(4x + 4y), x = 1, 2, 3; y = 0, 1, 2. Find (i) the marginal distributions of Y and (ii) P(X ≤ 2 | Y ≤ 1).

Q: Check whether X and Y are independent: P(X=1, Y=1) = 1/4, P(X=1, Y=0) = 1/4, P(X=0, Y=1) = 1/4, P(X=0, Y=0) = 1/4

Q: A shopping mall has parking facilities for both 2-wheelers and 4-wheelers. On a randomly selected day, let X and Y be the proportion of 2-wheelers and 4-wheelers respectively. The joint pdf of X and Y is:
f(x, y) = (2/3)(x + 2y); 0 ≤ x ≤ 1; 0 ≤ y ≤ 1
= 0 elsewhere
i. Find the marginal densities of X and Y.
ii. Find the probability that the proportion of two-wheelers is less than half.

Q: Prove the additive property of the Binomial distribution and the Poisson distribution.

Q: The amount of rainfall recorded in Jalna in June is a rv X and the amount in July is a rv Y. X and Y have a bivariate normal distribution, (X, Y) ~ (6, 4, 1, 0.25, 0.1). Find: (i) P(X ≤ 5) (ii) P(Y ≤ 5 | X = 5)

Q: Let X1, ..., Xn be i.i.d. with cdf F(x) and pdf f(x). Find the distribution of the min and max of the Xi.

Complete Tutorial with Problems and Solutions:
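One of the exercises above can be checked numerically. The sketch below is not part of the tutorial (the variable names are ours); it works the discrete problem p(x, y) = k(4x + 4y), x = 1, 2, 3; y = 0, 1, 2, determining k from the requirement that the probabilities sum to 1, building the marginal distribution of Y, and evaluating P(X ≤ 2 | Y ≤ 1).

from fractions import Fraction

xs, ys = [1, 2, 3], [0, 1, 2]

# k is fixed by requiring the joint probabilities to sum to 1.
k = Fraction(1, sum(4 * x + 4 * y for x in xs for y in ys))

def p(x, y):
    return k * (4 * x + 4 * y)

# (i) marginal distribution of Y
marginal_y = {y: sum(p(x, y) for x in xs) for y in ys}

# (ii) P(X <= 2 | Y <= 1) = P(X <= 2, Y <= 1) / P(Y <= 1)
joint = sum(p(x, y) for x in xs for y in ys if x <= 2 and y <= 1)
cond = joint / sum(p(x, y) for x in xs for y in ys if y <= 1)

print("k =", k)                      # 1/108
print("P(Y = y):", marginal_y)       # y=0: 2/9, y=1: 1/3, y=2: 4/9
print("P(X <= 2 | Y <= 1) =", cond)  # 8/15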
http://www.thelearningpoint.net/home/mathematics/probability---part-3---joint-probability-bivariate-normal-distributions-functions-of-random-variable-transformation-of-random-vectors
4
- 1. In your own words, define inventing. Then do the following: - a. Explain to your merit badge counselor the role of inventors and their inventions in the economic development of the United States. - b. List three inventions and state how they have helped humankind. - 2. Do ONE of the following: - a. Identify and interview with a buddy (and with your parent’s permission and merit badge counselor’s approval) an individual in your community who has invented a useful item. Report what you learned to your counselor. - b. Read about three inventors. Select the one you find most interesting and tell your counselor what you learned. - 3. Do EACH of the following: - a. Define the term intellectual property. Explain which government agencies oversee the protection of intellectual property, the types of intellectual property that can be protected, how such property is protected, and why protection is necessary. - b. Explain the components of a patent and the different types of patents available. - c. Examine your Scouting gear and find a patent number on a camping item you have used. With your parent’s permission, use the Internet to find out more about that patent. Compare the finished item with the claims and drawings in the patent. Report what you learned to your counselor. - d. Explain to your counselor the term patent infringement. - 4. Discuss with your counselor the types of inventions that are appropriate to share with others, and explain why. Tell your counselor about one nonpatented or noncopyrighted invention and its impact on society. - 5. Choose a commercially available product that you have used on an overnight camping trip with your troop. Make recommendations for improving the product, and make a sketch that shows your recommendations. Discuss your recommendations with your counselor. - 6. Think of an item you would like to invent that would solve a problem for your family, troop, chartered organization, community, or a special-interest group. Then do EACH of the following, while keeping a notebook to record your progress: - a. Talk to potential users of your invention and determine their needs. Then, based on what you have learned, write a statement describing the invention and how it would help solve a problem. This statement should include detailed sketch of the invention. - b. Create a model of the invention using clay, cardboard, or any other readily available material. List the materials necessary to build a prototype of the invention. - c. Share the idea and the model with your counselor and potential users of your invention. Record their feedback in your notebook. - 7. Build a working prototype of the item you invented for requirement 6*. Test and evaluate the invention. Among the aspects to consider in your evaluation are cost, usefulness, marketability, appearance, and function. Describe how your original vision and expectations for your invention are similar or dissimilar to the prototype you built. Have your counselor evaluate and critique your prototype. - Before you begin building the prototype, you must have your counselor’s approval, based on the design and building plans you have already shared. - 8. Do ONE of the following: - a. Participate with a club or team (robotics team, science club, or engineering club) that builds a useful item. Share your experience with your counselor. - b. Visit a museum or exhibit dedicated to an inventor or invention, and create a presentation of your visit to share with a group such as your troop or patrol. - 9. 
Discuss with your counselor the diverse skills, education, training, and experience it takes to be an inventor. Discuss how you can prepare yourself to be creative and inventive to solve problems at home, in school, and in your community. Discuss three career fields that might utilize the skills of an inventor.

The official source for the information shown in this article or section is: scoutingmagazine.org, 2015 Edition (BSA Supply SKU #620714)
http://www.meritbadge.org/wiki/index.php/Template:Inventing/req
4
In programming, a series of objects all of which are the same size and type. Each object in an array is called an array element. For example, you could have an array of integers or an array of characters or an array of anything that has a defined data type. The important characteristics of an array are that every element has the same data type and that each element can be accessed directly by its index (its position within the array).
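The definition above is language-agnostic. As a rough illustration (not from the original entry), Python's built-in array module enforces exactly the properties described: every element shares one declared type and is reached by its index.

from array import array

numbers = array('i', [3, 1, 4, 1, 5])   # 'i' = signed int: one data type for every element

numbers[2] = 9                          # elements are read and written by index
print(numbers[0], numbers[2], len(numbers))
# numbers.append("x")                   # would raise TypeError: the element type is fixed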
http://www.webopedia.com/TERM/A/array.html
4.09375
Objectives:
- Perform an Indian folk song with the correct nuance (degrees of differences).
- Work cooperatively in a group.
- Describe the characteristics of Indian vocal music.

Music is an essential ingredient in the lives of people in India. Folk and classical music accompany various styles of Indian dances. For Indians, music is a means to get in touch with the Supreme Being. A song is a means of communication and interaction between the worshipper and the deity. Through the clear expression of the rasa, or dynamics, the singer feels the presence of Brahma, the creator.

Vocal music in India is a way to express deep devotion to God. It is manifested through the art of vocalization, which becomes more than just a vocal warm-up but an act of devotion.

Most of the Indian classical songs of Northern India are devotional, but a few are not. The most notable devotional styles are the dhun or kirtan and the bhajan for the Hindus, the shabad for Sikhs, and the kawali (qawwali) for the Muslims. Not all Indian music is serious. The ghazal is one style known for its rich romantic and poetic content. The lakshan geet is a style oriented towards musical instruction, and the swarmalika is used for pedagogic purposes; in this style, sargam (solfège syllables) is used instead of words.

Hymn to Shiva
This is an example of an Indian song. There is an English translation of the text, and the notations are easy to sing. The flat sign is used before some of the notes in order to sing with the correct pitch. It has four measures in the bass clef. It has four beats. Sing the song and add the drone accompaniment vocally or on an instrument. Percussion instruments can also be used as an accompaniment to add color to the singing.

Some Indian songs are used to describe the scenic beauty of a particular region in the country. The song Tamil Nad speaks of the beauty of the Land of the Tamils. Indian songs are also used to bid a person farewell; the song Vijaya is one example.

Deva-Dasi Dance
This is another Indian composition which is highly rhythmical, with a flowing tone and rhythm. Clapping can be used in case there are no drums available.

The notations of the songs from India are classified into two forms.
http://www.slideshare.net/ElnaPanopio/indian-vocal-music
4.21875
Missouri Compromise 1820

In 1819, Missouri applied to join the Union as a slave state. This would have given the South a majority in the Senate. Henry Clay proposed admitting Maine as a free state to maintain the balance in Congress. Future states from the Louisiana Purchase would be free above 36°30' N latitude, and states below that line would be slave states.

Slavery in the West

David Wilmot, a Congressman from PA, submitted a bill, the Wilmot Proviso, that would ban slavery in any of the territories gained from Mexico. The bill passed in the House, but not in the Senate, leaving the question of slavery in the West unresolved.

Opposing Views

- Abolitionists: wanted to ban slavery throughout the entire country.
- Moderates: believed in extending the Missouri Compromise line or in popular sovereignty.
- Southerners: believed slavery should not be restricted and should be allowed everywhere, and that runaway slaves should be returned to their owners.

Free-Soil Party 1848

The party's main goal was to keep slavery from spreading to the western territories. They did not look to ban slavery where it already existed. This was the first election where slavery was an important issue.

Compromise of 1850
Chapter 16 Section 2

Slavery Debate Erupts Again

California applied for statehood as a free state in 1850. Southerners feared that they would be outvoted in the Senate, and it was suggested that they should secede from the Union. Like many northerners, Daniel Webster viewed slavery as evil. The breakup of the United States, however, he believed was worse. To save the Union, Webster was willing to compromise. He would support southern demands that northerners be forced to return fugitive slaves. John C. Calhoun refused to compromise, insisting that fugitive slaves be returned to their owners. Henry Clay feared that if a compromise was not reached, the country would break apart.

Compromise of 1850

Calhoun died and Clay became ill as Congress still debated the slavery issue. Stephen Douglas of IL took up Clay's fight for compromise, working to pass legislation that would satisfy both North and South. The compromise consisted of five separate components:

- California is admitted as a free state.
- The territories of New Mexico and Utah would uphold popular sovereignty.
- The slave trade is banned in Washington, D.C.
- The dispute over the Texas/New Mexico border is settled.
- The Fugitive Slave Act is passed.

What does this mean for the Missouri Compromise?

Fugitive Slave Act

The act required all citizens to return fugitive (runaway) slaves to their owners. Anyone who helped or allowed fugitives to escape would be fined $1,000 (equal to $25,480 today). African Americans suspected of being runaways were not allowed a trial by jury. Judges were paid $10.00 ($250.00 today) for ruling that a black person was a runaway to be returned to the South, and $5.00 ($125.00 today) for deciding they were free.
Uncle Tom's Cabin

Uncle Tom's Cabin is a book written by Harriet Beecher Stowe in 1852. Stowe lived along the Ohio River, where many slaves crossed to get to freedom. The book was fiction, but it was based on the stories she heard from escaped slaves. It gave people in the North a better understanding of what it meant to be a slave, and they came to see slavery as a moral problem.

The Crisis Deepens
Chapter 16 Section 3

Kansas Nebraska Act 1854

The Compromise of 1850 nullified the Missouri Compromise, but it only clarified how the slavery issue would be handled in the Mexican Cession. So what about the Kansas and Nebraska territories? Stephen Douglas proposed that the settlers of both territories decide whether slavery would be allowed upon applying for statehood. This is called popular sovereignty.

Predictions

What was the reaction to the Kansas Nebraska Act in the North? "Opponents of slavery called the act a 'criminal betrayal of precious rights.' Slavery could now spread to areas that had been free for more than 30 years. Some northerners protested by openly challenging the Fugitive Slave Act." Do you think popular sovereignty will solve the issue of slavery?

Crisis Turns Violent

The initial settlers in the Kansas territory came from the neighboring states for the purpose of acquiring cheap land. Few of these settlers owned slaves. Under the Kansas Nebraska Act, was the territory of Kansas going to enter the Union as a free state or a slave state? It would be decided using popular sovereignty.

To increase the number of slave owners in the Kansas territory, Border Ruffians, proslavery settlers from Missouri, rode across the border into Kansas. These men voted in the government elections and illegally voted in a proslavery government. The original settlers, who did not own slaves, refused to obey the proslavery government and elected their own legislature.

Bleeding Kansas

A proslavery band of men decided to attack the antislavery town of Lawrence, KS, destroying homes and a Free-Soil newspaper. In retaliation, John Brown, an abolitionist, with his four sons, attacked the proslavery settlement at Pottawatomie (paht uh waht uh mee) Creek. In the middle of the night he dragged five proslavery settlers from their beds and murdered them. This led both sides to use hit-and-run tactics, guerrilla warfare, on the other, killing over 200 people.

Violence in the Senate

Senator Charles Sumner of MA criticized the proslavery government of Kansas and verbally attacked proslavery southerners, specifically Andrew Butler. Butler's nephew, Congressman Preston Brooks, felt that Butler, due to his age, couldn't defend himself, so Brooks marched onto the Senate floor and beat Sumner over the head with a cane till he was bleeding and unconscious.

Dred Scott v. Sanford

In 1857, antislavery lawyers submitted a lawsuit, a legal case to settle a dispute, on behalf of a slave, Dred Scott, whose owner had died. His lawyers argued that, because his owner had moved him to reside in IL and WI, both free areas, he should be set free.

Dred Scott Decision 1857

The Supreme Court decided:
1) that Dred Scott was property and therefore not a citizen, so he was incapable of filing a lawsuit to begin with.
2) that, according to the Constitution, no citizen can be deprived of property; thus, Congress did not have the power to outlaw slavery in any territory.

"That the history of the nation during the last four years has fully established the propriety and necessity of the organization and perpetuation of the Republican Party and that the causes which called it into existence are permanent in their nature and now, more than ever before, demand its peaceful and constitutional triumph."

The Republican Party Emerges
Chapter 16 Section 4

Republican Party

Neither of the major political parties, the Whigs or the Democrats, would take a stand on the issue of slavery. In 1854, Free-Soilers, northern Democrats, and antislavery Whigs formed the Republican Party. Their major goal was to keep slavery from spreading to the West.

Abraham Lincoln, Republican

Lincoln entered the national political scene during his debates with Stephen Douglas in 1858 over the U.S. Senate seat from Illinois. During the series of debates, Douglas supported popular sovereignty, while Lincoln argued that slavery should not be allowed in the territories because a "house divided against itself could not stand."

John Brown Raid

Brown led a group of men to a federal arsenal (gun warehouse) at Harper's Ferry, VA. He believed that once weapons were available, slaves would join him and revolt against their owners. No revolt took place, and he was arrested by troops commanded by Robert E. Lee.

Hero or Villain?

Brown was found guilty of murder and treason (actions against one's country). Because he acted with such dignity throughout his trial, with his head held high, many northerners considered him a martyr, someone who is willing to give their life for a cause. Many southerners became convinced that the North wanted to destroy slavery. Why?

The Nation Divides
Chapter 16 Section 5

Election of 1860

Setting the scene: Republican Convention, Chicago, IL. "Fire the salute," ordered the delegate. "Old Abe is nominated!" Amid the celebration, though, a delegate from Kentucky struck a somber note. "Gentlemen, we are on the brink of a great civil war."

Election of 1860

- Abraham Lincoln – Republican: prevent the spread of slavery in the western territories
- Stephen Douglas – Northern Democrat: refused to support slavery
- John Breckinridge – Southern Democrat: supported the spread of slavery
- John Bell – Constitutional Union: a moderate Southerner who wanted to keep the Union together

Lincoln was able to win a majority of the electoral vote without even being listed on the ballot in 10 of the Southern states.

Southern Reaction

The South believed that when Lincoln took office he would abolish slavery. The South no longer had a voice in the federal government, and Congress, as well as the President, was against its interests, namely slavery. The governor of SC, Francis W. Pickens, wrote to the other southern states that it was their duty to secede from the Union. SC seceded from the Union on December 20, 1860.

Confederate States of America

By February 1861 the following states made up the Confederacy: South Carolina (first to secede), Alabama, Florida, Georgia, Louisiana, Mississippi, and Texas. At a convention held in Montgomery, AL, Jefferson Davis was appointed their president. Davis had served in the Mexican War and as a senator from MS; he was a supporter of states' rights and had been Secretary of War under President Pierce.

The Right to Secede?

Most southerners believed that they had every right to secede. After all, the Declaration of Independence said that "it is the right of the people to alter or to abolish" a government that denies the rights of its citizens. Lincoln, they believed, would deny white southerners the right to own slaves.

Civil War Begins

Lincoln took the oath of office on March 4, 1861. Lincoln's First Inaugural Address.

April 1861, Fort Sumter: the federal fort off the coast of South Carolina was in need of food and supplies. Lincoln informed the governor that he was shipping food and not weapons or ammunition. As part of the Confederacy, SC could not allow the Union to have control within its borders. Confederate soldiers bombarded the fort with shells, forcing the Union to surrender on April 13, 1861.
http://www.slideshare.net/thstoutenburg/slavery-divides-a-nation-chapter-16
4.03125
Christopher Columbus

Christopher Columbus's voyages during 1492-1502 altered the course of European history as we know it today. During his voyages throughout the West Indies, "Christopher Columbus paved the way for others to conquer and settle the new land in the name of the Spanish crown". Although at least "ten different powers would eventually play a role in settling the Caribbean", the British, French, and the Dutch would compete mostly with the Spanish. The Dutch were interested in trading, and would eventually introduce the British and French to the plantation systems of sugarcane production. The British and French were interested in colonization and would later have their colonies forced to trade only with their mother countries. To portray Columbus's influence on European history, I will concentrate on the influence he had on the Dutch's wants and needs compared to those of the British and the French.

Columbus represented a culture that was expanding its power. European countries were exploring to gain more access to natural resources in different parts of the world in order to increase their authority in comparison to their competitors. The riches of this new world attracted other European powers. The British, Dutch and French challenged Spain's monopoly in the 17th century. These powers used "piracy, smuggling, and outright war to take over lands and set up their own colonies". It was the Dutch, for example, that captured "Guiana, and the British captured St. Kitts, Barbados and Jamaica from Spain".

First, the Dutch were credited with the cultivation of sugarcane in their early Brazilian colonies. As a result of producing sugarcane, they set up trading centers on the few small islands they had settled. During the Dutch period, 1570 to 1678, "the Dutch shipping industry became the main provider of supplies and slaves for the other Caribbean colonies" and became the primary resource for sugar. Today, the "only reminders of Dutch activities in the Caribbean are Suriname and 6 small island possessions in the Lesser Antilles". In 1640, close to the end of the Dutch period, the Dutch introduced British and French colonists to the production of sugarcane. By the middle of the "17th century, some British pirates settled among logwood forests on the coast of the Bay of Honduras, which later became the Settlement of Belize. The French were also settling in North America and the Caribbean". Within Europe the "increasing market for sugar ensured the colonies an early success", and opened a new way of conducting business with their mother countries.

During the 18th century, the French and British fought for domination over the "New World". "The British took control of more and more territories in the Caribbean and by the 19th century were the major power in the Caribbean. Through this era, the "British
https://www.otherpapers.com/History-Other/Christopher-Columbus/35247.html
4.03125
Languages: English, Munsee, and Unami
Religions: Christianity, Native American Church, traditional tribal religion
Related: Other Algonquian peoples

The Lenape are a Native American tribe and First Nations band government. They are also called Delaware Indians, and their historical territory was along the Delaware River watershed, western Long Island and the Lower Hudson Valley. Most Lenape were pushed out of their Delaware homeland during the 18th century by expanding European colonies, exacerbated by losses from intertribal conflicts. Lenape communities were weakened by newly introduced diseases, mainly smallpox, and violent conflict with Europeans. Iroquois people occasionally fought the Lenape. Surviving Lenape moved west into the upper Ohio River basin. The American Revolutionary War and United States' independence pushed them further west. In the 1860s, the United States government sent most Lenape remaining in the eastern United States to the Indian Territory (present-day Oklahoma and surrounding territory) under the Indian removal policy. In the 21st century, most Lenape now reside in the US state of Oklahoma, with some communities living also in Wisconsin, Ontario (Canada) and in their traditional homelands.

The Lenape kinship system has matrilineal clans; that is, children belong to their mother's clan, from which they gain social status and identity. The mother's eldest brother was more significant as a mentor to the male children than was their father, who was of another clan. Hereditary leadership passed through the maternal line, and women elders could remove leaders of whom they disapproved. Agricultural land was managed by women and allotted according to the subsistence needs of their extended families. Families were matrilocal; newlywed couples would live with the bride's family, where her mother and sisters could also assist her with her growing family.

Lenni-Lenape (or Lenni-Lenapi) comes from their autonym, Lenni, which may mean "genuine, pure, real, original," and Lenape, meaning "Indian" or "man". (cf. Anishinaabe.) Alternately, lënu may be translated as "man." The Lenape, when first encountered by whites, were a loose association of related peoples who spoke similar languages and shared familial bonds in an area known as Lenapehoking, the Lenape traditional territory, which spanned what is now eastern Pennsylvania, New Jersey, southern New York, and eastern Delaware.

The tribe's other name, "Delaware," is not of Native American origin. English colonists named the Delaware River for the first governor of Virginia, Thomas West, 3rd Baron De La Warr, whose title was ultimately derived from French. (For the etymology of the surname, see Earl De La Warr § Etymology.) The English then began to call the Lenape the Delaware Indians because of where they lived. Swedes also settled in the area, and early Swedish sources listed the Lenape as the Renappi.

See main article: Lenapehoking.

Traditional Lenape lands, the Lenapehoking, encompassed the Delaware Valley of eastern Pennsylvania and western New Jersey from the Lehigh River south into eastern Delaware and the Delaware Bay, western Long Island, New York Bay, and the Lower Hudson Valley in New York. The Lenape lived in numerous small towns along the rivers and streams that fed the waterways. The Unami and Munsee languages belong to the Eastern Algonquian language group.
Although the Unami and Munsee peoples are related, they consider themselves distinct, as they used different words and lived on opposite sides of the Kittatinny Mountains of modern Pennsylvania. Today, only elders speak the language, although some Lenape youth and adults are learning the ancient language. The German- and English-speaking Moravian missionary John Heckewelder wrote: "The Monsey tong [sic] is quite different even though [it and Lenape] came out of one parent language."

William Penn, who first met the Lenape in 1682, stated that the Unami used the following words: "mother" was anna, "brother" was "isseemus," "friend" was netap. Penn instructed his fellow Englishmen: "If one asks them for anything they have not, they will answer, mattá ne hattá," which to translate is, not I have, instead of I have not."

According to the Moravian missionary David Zeisberger, the Unami word for "food" is May-hoe-me-chink; in Munsee it is Wool-as-gat. The Unami word for "hill" is Ah-choo; in Munsee it is Watts Unk. Sometimes the languages shared words, such as "corn," which is Xash-queem, or "wolf," which is too-may. In contemporary Unami orthography, food is michëwakàn; hill is ahchu; corn is xàskwim; and wolf is tëme.

Zeisberger and Heckewelder lived among the Unami and Munsee people in Pennsylvania and Ohio during the late-18th and early-19th centuries and interviewed them. David Zeisberger wrote A Lenâpé-English Dictionary: From An Anonymous [Manuscript] In The Archives Of The Moravian Church At Bethlehem, [Pennsylvania], David Zeisberger's History of Northern American Indians, The Diary of David Zeisberger: A Moravian Missionary Among the Ohio Indians, Grammar of the Language of the Lenni Lenape or Delaware Indians, and Zeisberger's Indian Dictionary: English, German, Iroquois—The Onondaga and Algonquin—The Delaware. The "Delaware" that Zeisberger translated is Munsee, and not Unami. John Heckewelder wrote extensively on the Lenape in his History, Manners, and Customs of the Indian Nations Who Once Inhabited Pennsylvania and Neighboring States, as well as The Names Which the Lenni Lenape or Delaware Indians Gave to Rivers, Streams, and Localities.

At the time of first European contact, a Lenape individual would have identified primarily with his or her immediate family and clan, friends, and/or village unit; then with surrounding and familiar village units; next with more distant neighbors who spoke the same dialect; and ultimately, with all those in the surrounding area who spoke mutually comprehensible languages, including the Nanticoke people, who lived to their south and west in present western Delaware and eastern Maryland, and the Munsee, who lived to their north. Among many Algonquian peoples along the East Coast, the Lenape were considered the "grandfathers" from whom other Algonquian-speaking peoples originated. Consequently, in inter-tribal councils, the Lenape were given respect as one would give to elders.

The Lenape had three phratries, which in turn contained twelve clans.

By 1682, when William Penn arrived in his American commonwealth, the Lenape had been so reduced by disease, famine, and war that the sub-clan mothers had reluctantly resolved to consolidate their families into the main clan family. This is why William Penn and all those after him believed that the Lenape clans had always had only three divisions (Turtle, Turkey, and Wolf) when, in fact, they had over thirty on the eve of European contact.
For example, some time between 1650 and 1680, the Bear, Deer, etc. families, with few members left, absorbed into the leading Wolf Family. Members of each clan were found throughout Lenape territory and clan lineage was traced through the mother. While clan mothers controlled the land, the houses, and the families, the clan fathers provided the meat, cleared the fields, built the houses, and protected the clan. Upon reaching adulthood, a Lenape male would marry outside of his clan, a practice known by ethnographers as, "exogamy". The practice effectively prevented inbreeding, even among individuals whose kinship was obscure or unknown. This means that a male from the Turkey Clan was expected to marry a female from either the Turtle or Wolf clans. His children, however, would not belong to the Turkey Clan, but to the mother's clan. As such, a person's mother's brothers (the person's matrilineal uncles) played a large role in his or her life as they shared the same clan lineage. To add clarity to the clan system, all males, as a part of their passage rites into adulthood, were tattooed with their clan symbol on their chests. This is why many English, Dutch, and Swedish traders believed that the Lenape had three or more tribes, when in fact, they were one nation of kindred people. Those of a different language stock, such as the Iroquois (or, in the Unami language, the Maax-waas Len [Bear People] or Minquas), were regarded as foreign. As in the case of the Iroquois, the animosity of difference and competition spanned many generations, and different language tribes became traditional enemies. Ethnicity seems to have mattered little to the Lenape and many other "tribes". Archaeological excavations have found Lenape burials that included identifiably ethnic Iroquois remains interred along with those of Lenape. The two groups were bitter enemies since before recorded history, but intermarriage occurred. In addition, both tribes practiced adopting young captives from warfare into their tribes and assimilating them as full tribal members. Early Europeans who first wrote about Indians found matrilineal social organization to be unfamiliar and perplexing. Because of this, Europeans often tried to interpret Lenape society through more familiar European arrangements. As a result, the early records are full of clues about early Lenape society, but were usually written by observers who did not fully understand what they were seeing. For example, a man's maternal uncle (his mother's brother), and not his father, was usually considered to be his closest male ancestor, since his uncle belonged to his mother's clan and his father belonged to a different one. The maternal uncle played a more prominent role in the lives of his sister's children than did the father. Early European chroniclers did not understand this concept. The band assigned land of their common territory to a particular clan for hunting, fishing, and cultivation. Individual private ownership of land was unknown, as the land belonged to the clan collectively while they inhabited it, but women often had rights to traditional areas for cultivation. Clans lived in fixed settlements, using the surrounding areas for communal hunting and planting until the land was exhausted. In a common practice known as "agricultural shifting", the group then moved to found a new settlement within their territory. The Lenape practiced large-scale agriculture to augment a mobile hunter-gatherer society in the regions around the Delaware River. 
The Lenape were largely a sedentary people who occupied campsites seasonally, which gave them relatively easy access to the small game that inhabited the region: fish, birds, shellfish and deer. They developed sophisticated techniques of hunting and managing their resources. By the time of the arrival of Europeans, the Lenape were cultivating fields of vegetation through the slash and burn technique. This extended the productive life of planted fields. They also harvested vast quantities of fish and shellfish from the bays of the area, and, in southern New Jersey, harvested clams year-round. The success of these methods allowed the tribe to maintain a larger population than nomadic hunter-gatherers could support. Scholars have estimated that at the time of European settlement, there may have been about 15,000 Lenape total in approximately 80 settlement sites around much of the New York City area, alone. In 1524 Lenape in canoes met Giovanni da Verrazzano, the first European explorer to enter New York Harbor. At the time of European contact, the Lenape practiced agriculture, mostly companion planting. The women cultivated many varieties of the "Three Sisters:" corn, beans, and squash. The men also practiced hunting and the harvesting of seafood. The people were primarily sedentary rather than nomadic; they moved to seasonal campsites for particular purposes such as fishing and hunting. European settlers and traders from the seventeenth-century colonies of New Netherland and New Sweden traded with the Lenape for agricultural products, mainly maize, in exchange for iron tools. The Lenape also arranged contacts between the Minquas or Susquehannocks and the Dutch and Swedish West India companies to promote the fur trade. The Lenape were major producers of wampum or shell beads, which they traditionally used for ritual purposes and as ornaments. After the Dutch arrival, they began to exchange wampum for beaver furs provided by Iroquoian-speaking Susquehannock and other Minquas. They exchanged these furs for Dutch and, from the late 1630s, also Swedish imports. Relations between some Lenape and Minqua polities briefly turned sore in the late 1620s and early 1630s, but were relatively peaceful most of the time. The early European settlers, especially the Dutch and Swedes, were surprised at the Lenape's skill in fashioning clothing from natural materials. In hot weather both men and women wore only loin cloth and skirt respectively, while they used beaver pelts or bear skins to serve as winter mantles. Additionally, both sexes might wear buckskin leggings and moccasins in cold weather. Deer hair, dyed a deep scarlet, was a favorite component of headdresses and breast ornaments for males. The Lenape also adorned themselves with various ornaments made of stone, shell, animal teeth, and claws. The women often wore headbands of dyed deer hair or wampum. They painted their skin skirts or decorated them with porcupine quills. These skirts were so elaborately appointed that, when seen from a distance, they reminded Dutch settlers of fine European lace. The winter cloaks of the women were striking, fashioned entirely from the iridescent body feathers of wild turkeys. The first recorded contact with Europeans and people presumed to have been the Lenape was in 1524. The explorer Giovanni da Verrazzano was greeted by local Lenape who came by canoe, after his ship entered what is now called Lower New York Bay. 
The early interaction between the Lenape and Dutch traders in the 17th century was primarily through the fur trade; specifically, the Lenape trapped and traded beaver pelts for European-made goods. According to Dutch settler Isaac de Rasieres, who observed the Lenape in 1628, the Lenape's primary crop was maize, which they planted in March. They quickly adopted European metal tools for this task. In May, the Lenape planted kidney beans near the maize plants; the latter served as props for the climbing bean vines. They also planted squash, whose broad leaves cut down on weeds and conserved moisture in the soil. The women devoted their summers to field work and harvested the crops in August. Women cultivated varieties of maize, squash and beans, and did most of the fieldwork, processing and cooking of food. The men limited their agricultural labor to clearing the field and breaking the soil. They primarily hunted and fished during the rest of the year. Dutch settler David de Vries, who stayed in the area from 1634 to 1644, described a Lenape hunt in the valley of the Achinigeu-hach (or "Ackingsah-sack," the Hackensack River), in which one hundred or more men stood in a line many paces from each other, beating thigh bones on their palms to drive animals to the river, where they could be killed easily. Other methods of hunting included lassoing and drowning deer, as well as forming a circle around prey and setting the brush on fire.

At the time of sustained European contact in the 16th and 17th centuries, the Lenape were a powerful Native American nation who inhabited a region on the mid-Atlantic coast spanning the latitudes of southern Massachusetts to the southern extent of Delaware, in what anthropologists call the Northeastern Woodlands. Although never politically unified, the confederation of the Delaware roughly encompassed the area around and between the Delaware and lower Hudson rivers, and included the western part of Long Island in present-day New York. Some of their place names, such as Manhattan, Raritan, and Tappan, were adopted by Dutch and English colonists to identify the Lenape people that lived there. Based on the historical record of the mid-seventeenth century, it has been estimated that most Lenape polities consisted of several hundred people, but it is conceivable that some had been considerably larger prior to close contact, given the wars between the Susquehannocks and the Iroquois, both of whom were armed by Dutch fur traders; the Lenape were at odds with the Dutch and so lost that particular arms race. Smallpox devastated native communities, even those located far from European settlements, by the 1640s. The Lenape and Susquehannocks fought a war in the middle of the 17th century that left the Delaware a tributary people, even as the Susquehannocks defeated the Province of Maryland in the 1640s and 1650s.

New Amsterdam was founded in 1624 by the Dutch in what would later become New York City. Dutch settlers also founded a colony at present-day Lewes, Delaware, on June 3, 1631 and named it Zwaanendael (Swan Valley). The colony had a short life, as in 1632 a local band of Lenape killed the 32 Dutch settlers after a misunderstanding escalated over Lenape defacement of the insignia of the Dutch West India Company. In 1634, the Iroquoian-speaking Susquehannock went to war with the Lenape over access to trade with the Dutch at New Amsterdam. They defeated the Lenape, and some scholars believe that the Lenape may have become tributaries to the Susquehannock.
After the warfare, the Lenape referred to the Susquehannock as "uncles." The Iroquois added the Lenape to the Covenant Chain in 1676; the Lenape were tributary to the Five Nations (later Six) until 1753, shortly before the outbreak of the French and Indian War (a part of the Seven Years' War in Europe). The Lenape's quick adoption of trade goods, and their need to trap furs to meet high European demand, resulted in their disastrous over-harvesting of the beaver population in the lower Hudson Valley. With the fur sources exhausted, the Dutch shifted their operations to present-day upstate New York. The Lenape who produced wampum in the vicinity of Manhattan Island temporarily forestalled the negative effects of the decline in trade. Lenape population fell sharply during this period, due to high fatalities from epidemics of infectious diseases carried by Europeans, such as measles and smallpox, to which they had no natural immunity, as the diseases had arisen on the Asian continent and moved west into Europe, where they had become endemic in the cities. The Lenape had a culture in which the clan and family controlled property. Europeans often tried to contract for land with the tribal chiefs, confusing their culture with that of neighboring tribes such as the Iroquois. The Lenape would petition for grievances on the basis that not all their families had been recognized in the transaction (not that they wanted to "share" the land). After the Dutch arrival in the 1620s, the Lenape were successful in restricting Dutch settlement until the 1660s to Pavonia in present-day Jersey City along the Hudson. The Dutch finally established a garrison at Bergen, which allowed settlement west of the Hudson within the province of New Netherland. This land was purchased from the Lenape after the fact. In 1682, William Penn and Quaker colonists created the English colony of Pennsylvania beginning at the lower Delaware River. A peace treaty was negotiated between the newly arriving English and Lenape at what is now known as Penn Treaty Park. In the decades immediately following, some 20,000 new colonists arrived in the region, putting pressure on Lenape settlements and hunting grounds. Although Penn endeavored to live peaceably with the Lenape and to create a colony that would do the same, he also expected his authority and that of the colonial government to take precedence. His new colony effectively displaced many Lenape and forced others to adapt to new cultural demands. Penn gained a reputation for benevolence and tolerance, but his efforts resulted in more effective colonization of the ancestral Lenape homeland than previous ones. William Penn died in 1718. His heirs, John and Thomas Penn, and their agents were running the colony, and had abandoned many of the elder Penn's practices. Trying to raise money, they contemplated ways to sell Lenape land to colonial settlers. The resulting scheme culminated in the so-called Walking Purchase. In the mid-1730s, colonial administrators produced a draft of a land deed dating to the 1680s. William Penn had approached several leaders of Lenape polities in the lower Delaware to discuss land sales further north. Since the land in question did not belong to their polities, the talks came to nothing. But colonial administrators had prepared the draft that resurfaced in the 1730s. The Penns and their supporters tried to present this draft as a legitimate deed. Lenape leaders in the lower Delaware refused to accept it. 
According to historian Steven Harper, what followed was a "convoluted sequence of deception, fraud, and extortion orchestrated by the Pennsylvania government that is commonly known as the Walking Purchase." In the end, all Lenape who still lived on the Delaware were driven off the remnants of their homeland under threats of violence. Some Lenape polities eventually retaliated by attacking Pennsylvania settlements. When they fought British colonial expansion to a standstill at the height of the Seven Years' War, the British government investigated the causes of Lenape resentment. The British asked William Johnson, Superintendent of Indian Affairs, to lead the investigation. Johnson had become wealthy as a trader and acquired thousands of acres of land in the Mohawk River Valley from the Iroquois Mohawk of New York.

Beginning in the 18th century, the Moravian Church established missions among the Lenape. The Moravians required the Christian converts to share their pacifism, as well as to live in a structured, European-style mission village. Moravian pacifism and unwillingness to take loyalty oaths caused conflicts with British authorities, who were seeking aid against the French and their Native American allies during the French and Indian War (Seven Years' War). The Moravians' insistence that Christian Lenape abandon traditional warfare practices alienated mission populations from other Lenape and Native American groups, who revered warriors. The Moravians accompanied Lenape relocations to Ohio and Canada, continuing their missionary work. The Moravian Lenape who settled permanently in Ontario after the American Revolutionary War were sometimes referred to as "Christian Munsee", as they mostly spoke the Munsee branch of the Delaware language.

During the French and Indian War, the Lenape initially sided with the French, as they hoped to prevent further British colonial encroachment in their territory. But such leaders as Teedyuscung in the east and Tamaqua in the vicinity of modern Pittsburgh shifted to building alliances with the English. After the end of the war, however, Anglo-American settlers continued to kill Lenape, to such an extent that, as the historian Amy Schutt writes, the dead in the years after the war outnumbered those killed during the war itself. The Treaty of Easton, signed in 1758 between the Lenape and the Anglo-American colonists, required the Lenape to move westward, out of present-day New York and New Jersey and into Pennsylvania, then Ohio and beyond. The Lenape continued sporadic raids on European-American settlers, striking from far outside the area. In 1763, Bill Hickman, a Lenape man, warned English colonists in the Juniata River region of an impending attack. Many Lenape joined in Pontiac's War, and were numerous among those Native Americans who besieged Pittsburgh. In April 1763, Teedyuscung was killed when his home was burned. His son Captain Bull responded by attacking settlers from New England who had migrated to the Wyoming Valley of Pennsylvania. The settlers had been sponsored by the Susquehanna Company.

The Lenape were the first Indian tribe to enter into a treaty with the new United States government, with the Treaty of Fort Pitt signed in 1778 during the American Revolutionary War. By then living mostly in the Ohio Country, the Lenape supplied the Continental Army with warriors and scouts in exchange for food supplies and security.
During the American Revolution, the Munsee-speaking Lenape (then called Delaware) bands of the Ohio Country were deeply divided over which side, if any, to take in the conflict. Their bands lived in numerous villages around their main village of Coshocton. At the time of the Revolutionary War, the Lenape villages lay between the western frontier strongholds of the British and the Patriots: the American colonists had Fort Pitt (present-day Pittsburgh) and the British with Indian allies controlled the area of Fort Detroit (in present-day Michigan). Some Lenape decided to take up arms against the American colonials and moved to the west, closer to Detroit, where they settled on the Scioto and Sandusky rivers. Those Lenape sympathetic to the United States remained at Coshocton, and leaders signed the Treaty of Fort Pitt (1778) with the Americans. Through this, the Lenape hoped to establish the Ohio Country as a state inhabited exclusively by Native Americans, as part of the new United States. A third group of Lenape, many of them converted Christian Munsees, lived in several mission villages run by Moravians. (They spoke the Munsee branch of Delaware, an Algonquian language.) White Eyes, the Lenape chief who had negotiated the treaty, died in 1778. Many Lenape at Coshocton eventually joined the war against the Americans. In response, Colonel Daniel Brodhead led an expedition out of Fort Pitt and on 19 April 1781 destroyed Coshocton. Surviving residents fled to the north. Colonel Brodhead convinced the militia to leave the Lenape at the Moravian mission villages unmolested, since they were unarmed non-combatants. Brodhead's having to restrain the militia from attacking the Moravian villages was a reflection of the brutal nature of frontier warfare. Violence had escalated on both sides. Relations between regular Continental Army officers from the East (such as Brodhead) and western militia were frequently strained. The tensions were worsened by the American government's policy of recruiting some Indian tribes as allies in the war. Western militiamen, many of whom had lost friends and family in Indian raids against settlers' encroachment, blamed all Indians for the acts of some. During the early 1770s, missionaries, including David Zeisberger and John Heckewelder, arrived in the Ohio Country near the Delaware villages. The Moravian Church sent these men to convert the natives to Christianity. The missionaries established several missions, including Gnadenhutten, Lichtenau, and Schoenbrunn. The missionaries asked that the natives forsake all of their traditional customs and ways of life. Many Delaware did adopt Christianity, but others refused to do so. The Delaware became a divided people during the 1770s, including in Killbuck's family. Killbuck resented his grandfather for allowing the Moravians to remain in the Ohio Country. The Moravians believed in pacifism, and Killbuck believed that every convert to the Moravians deprived the Delaware of a warrior to stop further white settlement of their land. During the French and Indian War, Killbuck assisted the English against their French enemy. In 1761, Killbuck led an English supply train from Fort Pitt to Fort Sandusky. The British paid him one dollar per day. Later Killbuck became a leader in a very dangerous time for the Delaware. The American Revolution had just begun, and Killbuck found his people caught between the English in the West and the Americans in the East. At the war's beginning, Killbuck and many Delaware claimed to be neutral. 
In 1778, Killbuck permitted American soldiers to traverse Delaware territory so that the soldiers could attack Fort Detroit. In return, Killbuck requested that the Americans build a fort near the natives' major village of Coshocton to provide the Delaware with protection from English attacks. The Americans agreed and built Fort Laurens, which they garrisoned. Other Indian groups, especially the Wyandot, the Mingo, the Munsee, the Shawnee, and the Wolf Clan of the Delaware, favored the British. They believed that, through the Proclamation of 1763 restricting Anglo-American settlement to east of the Appalachian Mountains, the British would help them preserve a Native American territory. The British planned to attack Fort Laurens in early 1779 and demanded that the neutral Delawares formally side with the British. Killbuck warned the Americans of the planned attack. His actions helped save the fort, but the Americans abandoned it in August 1779. The Delaware had lost their protectors and, in theory, faced attacks from the British, their native allies, and the American settlers who flooded into the area in the late 1770s and early 1780s after the war. Most Delaware formally joined the British after the American withdrawal from Fort Laurens. Facing pressure from the British, the Americans, and even his fellow natives, Killbuck hoped a policy of neutrality would save his people from destruction. It did not.

The amateur anthropologist Silas Wood published a book claiming that there were several American Indian tribes that were unique to Long Island, New York. He collectively called them the Metoac. Modern scientific scholarship has shown that two linguistic groups representing two Algonquian cultural identities lived on the island, not "13 individual tribes" as asserted by Wood. The bands to the west were Lenape. Those to the east were more related culturally to the Algonquian tribes of New England across Long Island Sound, such as the Pequot. Wood (and earlier settlers) often misinterpreted the Indian use of place names for identity as indicating their names for "tribes."

Over a period of 176 years, European settlers progressively crowded the Lenape out of the East Coast and Ohio, and pressed them to move further west. Most members of the Munsee-language branch of the Lenape left the United States after the British were defeated in the American Revolutionary War. Their descendants live on three Indian reserves in Western Ontario, Canada. They are descendants of those Lenape of Ohio Country who sided with the British during the Revolutionary War. The largest reserve is at Moraviantown, Ontario, where the Turtle Phratry settled in 1792 following the war. Two groups, the Brotherton Indians of New Jersey and the Stockbridge-Munsee, migrated to Oneida County, New York, by 1802. After 1819, they removed to Wisconsin, under pressure from state and local governments. By the Treaty of St. Mary's, signed October 3, 1818, in St. Mary's, Ohio, the Delaware ceded their lands in Indiana for lands west of the Mississippi and an annuity of $4,000. Over the next few years, the Delaware settled on the James River in Missouri near its confluence with Wilsons Creek, eventually occupying about 40,000 acres of the approximately 2,000,000 acres allotted to them. Anderson, Indiana, is named after Chief William Anderson, whose father was Swedish. The Delaware village in Indiana was called Anderson's Town, while the Delaware village in Missouri on the James River was often called Anderson's Village.
The tribes' cabins and cornfields were spread out along the James River and Wilsons Creek. Many Delaware participated in the exploration of the western United States, working as trappers with the mountain men and as guides and hunters for wagon trains. They served as army guides and scouts in events such as the Second Seminole War, Frémont's expeditions, and the conquest of California during the Mexican-American War. Occasionally, they played surprising roles as Indian allies. Sagundai accompanied one of Frémont's expeditions as one of his Delaware guides. From California, Frémont needed to communicate with Senator Benton. Sagundai volunteered to carry the message through some 2,200 kilometres of hostile territory. He took many scalps in this adventure, including that of a Comanche with a particularly fine horse, who had outrun both Sagundai and the other Comanche. Sagundai was thrown when his horse stepped into a prairie-dog hole, but he avoided the Comanche's lance, shot the warrior dead, caught his horse, and escaped the other Comanche. When Sagundai returned to his own people in present-day Kansas, they celebrated his exploits with the last war and scalp dances of their history. These were held at Edwardsville, Kansas.

By the terms of the "Treaty of the James Fork", made September 24, 1829, and ratified by the US Senate in 1830, the Delaware were forced to move further west. They were granted lands in Indian Territory in exchange for lands on the James Fork of the White River in Missouri. These lands, in what is now Kansas, were west of the Missouri and north of the Kansas River. The main reserve consisted of about 1,000,000 acres, with an additional "outlet" strip 10 miles wide extending to the west. In 1854, Congress passed the Kansas–Nebraska Act, which created the Territory of Kansas and opened the area for white settlement. It also authorized negotiation with Indian tribes regarding removal. The Delaware were reluctant to negotiate for yet another relocation, but they feared serious trouble with white settlers, and conflict developed. As the Delaware were not considered United States citizens, they had no access to the courts and no way to enforce their property rights. The United States Army was to enforce their rights to reservation land after the Indian Agent had both posted a public notice warning trespassers and served written notice on them, a process generally considered onerous. Major B.F. Robinson, the Indian Agent appointed in 1855, did his best, but he could not control the hundreds of white trespassers who stole stock, cut timber, built houses, and squatted on Delaware lands. By 1860, the Delaware had reached consensus to leave Kansas, which was in accord with the government's Indian removal policy.

The main body of Lenape arrived in Indian Territory in the 1860s. As a result of the multiple removals, each leaving some Lenape who chose to stay in place, Lenape people and their descendants are located today in New Jersey, Wisconsin, and southwest Oklahoma. The two largest groups are the Delaware Nation (Anadarko, Oklahoma) and the Delaware Tribe of Indians (Bartlesville, Oklahoma). The Delaware Tribe of Indians was required to purchase land from the reservation of the Cherokee Nation; it made two payments totaling $438,000. A court dispute followed over whether the sale included rights for the Delaware as citizens within the Cherokee Nation.
While the dispute was unsettled, the Curtis Act of 1898 dissolved tribal governments and ordered the allotment of communal tribal lands to individual households of members of tribes. After the lands were allotted in 160-acre (650,000 m²) lots to tribal members in 1907, the government sold "surplus" land to non-Indians.

The Delaware migrated into Texas in the late 18th and early 19th centuries. Elements of the Delaware migrated from Missouri into Texas around 1820, settling around the Red River and Sabine River. The Delaware were peaceful and shared their territory in Spanish Texas with the Caddo and other immigrating bands, as well as with the Spanish and the ever-increasing American population. This peaceful trend continued after Mexico won its independence from Spain in 1821. In 1828, Mexican General Manuel de Mier y Terán made an inspection of eastern Mexican Texas and estimated that the region housed between 150 and 200 Delaware families. The Delaware asked Mier y Terán to issue them land grants and send teachers, so they might learn to read and write the Spanish language. The General, impressed with how well they had adapted to Mexican culture, sent their request to Mexico City, but the authorities never granted the Delaware any legal titles.

The situation changed when the Texas Revolution began in 1835. Texas officials were eager to gain the support of the Texas tribes and offered to recognize their land claims, sending three commissioners to negotiate a treaty. A treaty was agreed upon in February 1836 which mapped the boundaries of Indian lands, but this agreement was never officially ratified by the Texas government. The Delaware remained friendly after Texas won its independence. Republic of Texas President Sam Houston favored a policy of peaceful relations with all tribes. He sought the services of the friendly Delaware and in 1837 enlisted several Delaware to protect the frontier from hostile western tribes. Delaware scouts joined with Texas Rangers as they patrolled the western frontier. Houston also tried to get the Delaware land claims recognized, but his efforts were met only by opposition. The next Texan President, Mirabeau B. Lamar, completely opposed all Indians. He considered them illegal intruders who threatened the settlers' safety and lands, and he issued an order for their removal from Texas. The Delaware were sent north of the Red River into Indian Territory; however, a few scattered Delawares remained in Texas. In 1841, Houston was reelected to a second term as president, and his peaceful Indian policy was reinstated. A treaty with the remaining Delaware and a few other tribes was negotiated in 1843 at Fort Bird, and the Delaware were enlisted to help make peace with the Comanche. Delaware scouts and their families were allowed to settle along the Brazos and Bosque rivers in order to influence the Comanche to come to the Texas government for a peace conference. The plan was successful, and the Delaware helped bring the Comanches to a treaty council in 1844.

In 1845, the Republic of Texas agreed to annexation by the US to become an American state. The Delaware continued their peaceful policy with the Americans and served as interpreters, scouts, and diplomats for the US Army and the Indian Bureau. In 1847, John Meusebach was assisted by Jim Shaw (Delaware) in settling the German communities in the Texas Hill Country. For the remainder of his life, Shaw worked as a military scout in West Texas.
In 1848, John Conner (Delaware) guided the Chihuahua-El Paso Expedition and was granted a league of land by a special act of the Texas legislature in 1853. The expeditions of the mapmaker Randolph B. Marcy through West Texas in 1849, 1852, and 1854 were guided by Black Beaver (Delaware). In 1854, despite the history of peaceful relations, the last of the Texas Delaware were moved by the American government to the Brazos Indian Reservation near Graham, Texas. In 1859, the US forced the remaining Delaware to remove from Texas to a location on the Washita River in the vicinity of present-day Anadarko, Oklahoma.

In 1979, the United States Bureau of Indian Affairs revoked the tribal status of the Delaware living among the Cherokee in Oklahoma and began to count the Delaware as Cherokee. The Delaware had this decision overturned in 1996, when they were recognized by the federal government as a separate tribal nation. The Cherokee Nation filed suit to overturn the independent federal recognition of the Delaware. The tribe lost federal recognition in a 2004 court ruling in favor of the Cherokee Nation, but regained it on 28 July 2009. After recognition, the tribe reorganized under the Oklahoma Indian Welfare Act. Members approved a constitution and bylaws in a May 26, 2009, vote. Jerry Douglas was elected as tribal chief.

In 2004, the Delaware Nation filed suit against Pennsylvania in the United States District Court for the Eastern District of Pennsylvania, seeking to reclaim 315 acres included in the 1737 Walking Purchase in order to build a casino. In the suit, titled "The Delaware Nation v. Commonwealth of Pennsylvania", the plaintiffs, acting as the successor in interest and political continuation of the Lenni Lenape and of Lenape Chief "Moses" Tundy Tatamy, claimed aboriginal and fee title to the 315 acres of land located in Forks Township in Northampton County, near the town of Tatamy, Pennsylvania. After the Walking Purchase, Chief Tatamy had been granted legal permission for him and his family to remain on this parcel of land, known as "Tatamy's Place". In addition to suing the state, the tribe also sued the township, the county, and elected officials, including Gov. Ed Rendell. The court held that the justness of the extinguishment of aboriginal title is nonjusticiable, including in the case of fraud. Because the extinguishment occurred prior to the passage of the first Indian Nonintercourse Act in 1790, that Act did not avail the Delaware. As a result, the court granted the Commonwealth's motion to dismiss. In its conclusion, the court stated: "... we find that the Delaware Nation's aboriginal rights to Tatamy's Place were extinguished in 1737 and that, later, fee title to the land was granted to Chief Tatamy, not to the tribe as a collectivity."

Three Lenape tribes are federally recognized in the United States. The Canadian Lenape left the United States in the late 1700s following the American Revolutionary War and settled in what is now Ontario. Consequently, Canada recognizes three Lenape First Nations (with four Indian reserves), all located in Southwestern Ontario. New Jersey has two state-recognized tribes which are in part Lenape: the Nanticoke Lenni-Lenape Indians of New Jersey and the Ramapough Lenape Nation. In Delaware, the Lenape are organized and state-recognized as the Lenape Indian Tribe of Delaware. Some Lenape or Delaware live in communities known as Urban Indians in their historic homeland in a number of states such as Delaware, Maryland, New Jersey, and Virginia.
New York City and Philadelphia are known to have some Lenape residents. Some Lenape live within the city of Tulsa, Oklahoma, and others live in diaspora across the country. Large communities of Lenape people live in the vicinities of Bartlesville, Oklahoma and Anadarko, Oklahoma. Additionally, over a dozen unrecognized tribes claim Lenape descent. Unrecognized Lenape organizations in Colorado, Idaho, and Kansas have petitioned the United States federal government for recognition. The Walam Olum, which purported to be an account of the Delaware's migration to the lands around the Delaware River, emerged through the works of Constantine Samuel Rafinesque in the 19th century. For many decades, scholars believed it was genuine. In the 1980s and 1990s, newer textual analysis suggested it was a hoax. In Cormac McCarthy's Blood Meridian, the group of American scalphunters are aided by an unspecified number of Delaware, who serve as scouts and guides through the western deserts. In The Light in the Forest, True Son is adopted by a band of Lenape. In Mark Raymond Harrington's 1938 novel, The Indians of New Jersey: Dickon among the Lenapes, a group of Lenape find a shipwrecked English boy. His gradual integration into the tribe provides a study of Lenape life, society, weaponry, and beliefs. The book includes a glossary for Lenape terms. Trouble's Daughter: The Story of Susanna Hutchinson, Indian Captive is a young adult novel of a fictional kidnapping by the Lenape Turtle Clan of a daughter of Anne Hutchinson, the religious reformer and founder of the Rhode Island colony. Moon of Two Dark Horses is a novel of the friendship between a white settler and a Lenape boy at the time of the Revolutionary War. Standing in the Light, The Captive Diary of Catherine Carey Logan, part of the Dear America series of fictional diaries, is a novel by Mary Pope Osborne. It tells the story of the capture of a teenage girl and her brother by a band of Lenape, and the youths' assimilation into Lenape culture. Peter (Per) Lindeström's Geographia Americae with an Account of the Delaware Indians is one of the few sympathetic contemporary accounts of Lenape life in the lower Delaware River valley during the 17th century. Moravian missionary John Heckewelder published a sympathetic account of the Lenape in exile in the Ohio Valley. His account, published in 1818, provides some alternate Lenape tribal history disputing the tributary relationship with the Susquehannock. "Scouts of '76: a tale of the revolutionary war", a 1924 book by Charles E. Willis, contains an account of the contributions of the Lenni Lenape to the American Revolution when they lived in the area of Lake Wawayanda. The Ramapough Lenape Nation is central to the Sundance Channel series The Red Road, in a newly (Federally) recognized reservation straddling the border between New York and New Jersey.
http://everything.explained.today/Lenape/
4.03125
Ligation can be defined as the act of joining, and in biology the term refers to an enzymatic reaction that joins two biomolecules with a covalent bond. This video describes the application of DNA ligation in molecular biology research. In the cell, DNA ligases are enzymes that identify and seal breaks in DNA by catalyzing the formation of phosphodiester bonds between the 3'-hydroxyl and 5'-phosphate groups of the DNA backbone. Ligation occurs as part of normal cellular processes, such as DNA replication, to repair single and double strand DNA breaks. In the laboratory, DNA ligase is routinely used in molecular cloning - a process that joins endonuclease-digested DNA fragments, or inserts, with an endonuclease-digested vector, such as a plasmid, so that the fragment can be introduced into host cells and then replicated. Endonuclease digestions involve the use of restriction endonucleases, or restriction enzymes, which cut DNA at specific sequences. These cuts can produce 3' or 5' single-stranded overhangs, called sticky ends, or ends with no overhangs, called blunt ends. Ligating sticky ends is advantageous, because the complementary overhanging base pairs stabilize the reaction. Because blunt-end ligations have no complementary base pairing, the ligation is less efficient and it is more difficult for the enzyme to join the ends. Sticky and blunt ends cannot, under normal circumstances, be ligated together. However, the Klenow fragment, produced by subtilisin digestion of DNA polymerase I, can convert sticky ends to blunt ends. Klenow possesses 3' to 5' exonuclease activity that chews up 3' overhangs, and polymerase activity that blunts 5' overhangs by extending the 3' end of the complementary strand. When the goal is to insert a gene into a plasmid, resealing of the vector DNA, called self-ligation, is a common undesirable outcome of a ligation reaction. Alkaline phosphatase treatment of the vector DNA after digestion removes the 5' phosphates on both ends and prevents self-ligation. As mentioned previously, vector and insert DNAs are digested with endonucleases before a ligation is begun. Following gel purification of the digested vector and insert, the DNA concentration of each is measured with a spectrophotometer. From this concentration, the number of molecules of insert or vector in 1 µl can be determined based on the average molecular weight of a DNA base pair and the number of base pairs in each fragment. Based on the calculated molecular concentrations of vector and insert, a 3 to 1 molar ratio of insert to vector is used to determine the volumes of vector and insert added to the reaction. This 3 to 1 ratio of DNA insert to vector is desirable because it increases the probability of the insert being ligated into the vector rather than the vector ligating to itself. Now that we have determined the amount of vector and insert DNA to use in the reaction, we proceed to set up the ligation reaction on ice. The order in which reaction components should be added to your microfuge tube is as follows: enough sterile water to make a 10 µl final volume (in our case, 4 µl), 1 µl of 10X ligation buffer, 1 µl of 10 mM ATP, 1 µl of vector and 3 µl of insert DNA, as calculated, and finally 1 µl of DNA ligase.
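The insert-to-vector arithmetic described above is easy to script. The following is a minimal sketch of that calculation in Python; the concentrations, fragment lengths, target ratio default, and function names are illustrative assumptions and are not taken from the video.

```python
# Minimal sketch of the insert:vector molar-ratio calculation described above.
# All example numbers (50 ng vector, 3,000 bp plasmid, 1,000 bp insert) are
# hypothetical; adjust them for the actual fragments being ligated.

AVG_BP_MASS_G_PER_MOL = 650.0  # approximate average mass of one DNA base pair (g/mol)
AVOGADRO = 6.022e23

def molecules_per_ul(conc_ng_per_ul: float, length_bp: int) -> float:
    """Number of DNA molecules in 1 µl, from concentration and fragment length."""
    grams_per_ul = conc_ng_per_ul * 1e-9
    mol_per_ul = grams_per_ul / (length_bp * AVG_BP_MASS_G_PER_MOL)
    return mol_per_ul * AVOGADRO

def insert_mass_for_ratio(vector_ng: float, vector_bp: int,
                          insert_bp: int, ratio: float = 3.0) -> float:
    """Mass of insert (ng) giving `ratio` insert molecules per vector molecule."""
    return vector_ng * (insert_bp / vector_bp) * ratio

if __name__ == "__main__":
    vector_ng, vector_bp, insert_bp = 50.0, 3000, 1000
    needed_insert_ng = insert_mass_for_ratio(vector_ng, vector_bp, insert_bp)
    print(f"Use about {needed_insert_ng:.1f} ng of insert for a 3:1 molar ratio.")
    print(f"Vector molecules per µl at 50 ng/µl: {molecules_per_ul(50.0, vector_bp):.2e}")
```

In practice the ratio is often tuned empirically; the point of the sketch is simply that the comparison is made in numbers of molecules rather than in nanograms.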
The reaction is mixed thoroughly, centrifuged, and incubated at the appropriate temperature. Whether you are doing a sticky- or blunt-end ligation affects the temperature and duration of the ligation reaction. For example, a sticky-end ligation with a six base pair overhang can be carried out near room temperature for about 1 hr, because the complementary ends stabilize the joining of fragments. Short overhangs or blunt-end ligations should be carried out between 14–20 °C overnight. Now that we have learned how to set up a ligation reaction, let's have a look at some of the applications of this procedure. Ligations can be used to directly insert PCR-amplified fragments into linearized plasmids. Here you see a researcher taking a sample of frozen mouse brain, isolating genomic DNA from it, and then subjecting it to bisulfite PCR, which is a PCR-based method to detect methylated DNA. PCR products are then directly ligated into the plasmid to create a library of genes that are methylated in that particular brain region. Ligations can be used to attach oligonucleotide linkers, which contain binding sites for PCR primers, to purified DNA fragments. When working with tumor samples, scientists can use this approach to sequence tumor genomic DNA, with the hope of identifying tumor-causing mutations. In this video, ligation is performed on DNA isolated from formaldehyde-fixed cells and subsequently treated with a restriction enzyme and Klenow in the presence of biotin, which is then used to pull down ligated DNA. This DNA is then amplified using PCR and the products sequenced to identify chromatin interactions at various scales, as shown. You have now learned about DNA ligase, various principles involved in setting up ligation in the laboratory, potential problems and fixes, and various applications of ligation in molecular biology research. Thanks for watching. In molecular biology, ligation refers to the joining of two DNA fragments through the formation of a phosphodiester bond. An enzyme known as a ligase catalyzes the ligation reaction. In the cell, ligases repair single and double strand breaks that occur during DNA replication. In the laboratory, DNA ligase is used during molecular cloning to join insert DNA fragments with vectors – carrier DNA molecules that will replicate target fragments in host organisms. This video provides an introduction to DNA ligation. The basic principle of ligation is described, as well as a step-by-step procedure for setting up a generalized ligation reaction. Critical aspects of ligation reactions are discussed, such as how the length of a sticky end overhang affects the reaction temperature and how the ratio of DNA insert to vector should be tailored to prevent self-ligation. Molecular tools that assist with ligations, like the Klenow fragment and shrimp alkaline phosphatase (SAP), are mentioned, and applications, such as proximity ligation and the addition of linkers to fragments for sequencing, are also presented. JoVE Science Education Database. Basic Methods in Cellular and Molecular Biology. DNA Ligation Reactions. JoVE, Cambridge, MA, doi: 10.3791/5069 (2016). In this video, PCR is used to amplify regions in bacterial genomic DNA called clustered regularly interspaced short palindromic repeat (CRISPR) sequences. Ligations are used to introduce endonuclease-digested and gel-purified fragments into a vector, which is then transformed into E. coli.
CRISPR sequences are of interest to scientists because they are important components of bacterial defense against viral infections. The telomere is a repetitive nucleotide sequence at the end of a chromosome, which protects an organism's genetic material from degradation. In this video, the ligation of adaptors containing PCR primers is used to determine the G-overhang structure of the telomeres found in Trypanosoma brucei - the causative agent of African sleeping sickness. A BioBrick part is a DNA sequence of defined structure and function that has standardized upstream and downstream sequences. Ligation is used in this video to introduce BioBrick parts into a plasmid that enables E. coli to metabolize hydrocarbons. This proof-of-concept study shows the possibility of a sustainable approach to oil remediation through synthetic biology. In this article, linearized λ-phage DNA is modified through the annealing and ligation of modified oligonucleotides to form a replication fork. The ligation of biotin- and digoxigenin-labeled probes on opposite ends of the λ-phage replication fork allows for the real-time observation of DNA replication through microscopy. Ligation is used to add biotin and digoxigenin to λ DNA. The conjugated DNA is adhered to a flow cell, and magnetic beads are attached for use with magnetic tweezers. This procedure allows for the measurement of forces exerted by individual proteins.
http://www.jove.com/science-education/5069/dna-ligation-reactions
4.125
Grading on a curve

In education, grading on a curve (also referred to as curved grading, bell curving, or using grading curves) is a statistical method of assigning grades designed to yield a pre-determined distribution of grades among the students in a class. The term "curve" refers to the bell curve, the graphical representation of the probability density of the normal distribution (also called the Gaussian distribution), but this method does not necessarily use any specific frequency distribution. One method of applying a curve uses three steps:
- Numeric scores (or possibly scores on a sufficiently fine-grained ordinal scale) are assigned to the students. The absolute values are less relevant, provided that the order of the scores corresponds to the relative performance of each student within the course.
- These scores are converted to percentiles (or some other system of quantiles).
- The percentile values are transformed to grades according to a division of the percentile scale into intervals, where the interval width of each grade indicates the desired relative frequency for that grade.
For example, if there are five grades in a particular university course, A, B, C, D, and F, where A is reserved for the top 20% of students, B for the next 30%, C for the next 30%-40%, and D or F for the remaining 10%-20%, then scores in the percentile interval from 0% to 20% will receive a grade of D or F, scores from 21% to 50% will receive a grade of C, scores from 51% to 80% will receive a grade of B, and scores from 81% to 100% will receive a grade of A.

Consistent with the example illustrated above, a grading curve allows academic institutions to ensure the distribution of students across certain grade point average (GPA) thresholds. As many professors establish the curve to target a course average of a C, the corresponding grade point average equivalent would be a 2.0 on the standard 4.0 scale employed at most North American universities. Similarly, a grade point average of 3.0 on a 4.0 scale would indicate that the student is within the top 20% of the class. Grading curves serve to attach additional significance to these figures, and the specific distribution employed may vary between academic institutions. The ultimate objective of grading curves is to minimize or eliminate the influence of variation between different instructors of the same course, ensuring that the students in any given class are assessed relative to their peers. This also circumvents problems associated with utilizing multiple versions of a particular examination, a method often employed where test administration dates vary between class sections. Regardless of any difference in the level of difficulty, real or perceived, the grading curve ensures a balanced distribution of academic results.
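The three-step method above can be made concrete with a short script. This is a minimal sketch in Python, assuming the example cutoffs given in the text (bottom 20% D/F, next 30% C, next 30% B, top 20% A); the function names and sample scores are illustrative, not part of the article.

```python
# Minimal sketch of curved grading: raw scores -> percentile ranks -> letters.
# Cutoffs mirror the example in the text; they are not a universal standard.

def percentile_ranks(scores):
    """Map each raw score to its percentile rank within the class (0-100)."""
    n = len(scores)
    return [100.0 * sum(s <= x for s in scores) / n for x in scores]

def curve(scores, cutoffs=((20, "D/F"), (50, "C"), (80, "B"), (100, "A"))):
    """Assign letter grades from percentile ranks using interval cutoffs."""
    grades = []
    for p in percentile_ranks(scores):
        for upper, letter in cutoffs:
            if p <= upper:
                grades.append(letter)
                break
    return grades

if __name__ == "__main__":
    raw = [55, 62, 71, 74, 80, 85, 88, 90, 93, 99]   # hypothetical exam scores
    print(list(zip(raw, curve(raw))))
```

With the ten hypothetical scores shown, the output reproduces the 20/30/30/20 split from the example; only the ordering of scores matters, not their absolute values.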
https://en.wikipedia.org/wiki/Grading_curve
4.125
Mohandas Karamchand Gandhi (1869-1948) was the most important Indian political and spiritual leader of the 20th century. Gandhi's influence was so great that his methods were later adopted by many political activists around the world, including American civil rights leaders such as Martin Luther King Jr. Gandhi was born into a middle-class Hindu family in Porbander, a small town on the western coast of India. At the age of 13, Gandhi entered into an arranged marriage with a 10-year-old girl named Kasturba. (They were to remain married their entire lives.) In 1888, at the age of 19, Gandhi traveled to England to study law. After three years, he became a lawyer and returned to India, and after a year of practicing law unsuccessfully, he was offered a job by an Indian businessman with interests in South Africa. In 1892, at the age of 23, Gandhi traveled to South Africa, where he was to remain for over 20 years. At the time, the Indians in South Africa, mostly Hindus, had no legal rights. The European colonialists did not consider Hindus to be full human beings and referred to them as "coolies". Gandhi became a leader of the Indian community and, over the years, developed a political movement based on the methods of non-violent civil disobedience, which he called "satyagraha". Around 1905, Gandhi gave up Western ways and, for the rest of his life, followed the traditional Hindu precepts of austerity and self-denial. He dressed simply, in a loin cloth and shawl, and had no other material possessions. In 1915, at the age of 46, Gandhi returned to India, where he spent a year traveling widely and then the next few years helping to settle many local disputes. His success led to his being admired throughout the country, so much so that India's most well-known writer, Rabindranath Tagore, gave Gandhi the title Mahatma ("Great Soul"). Gandhi himself, however, repudiated the honor, even though, within the Hindu culture, being called "Mahatma" is a symbol of enormous respect. At the time Gandhi was born, India was a heterogeneous region, a British colony consisting of more than 500 different "native states", that is, kingdoms and principalities. (Gandhi himself was born in the state of Kathiawar.) The native states were allowed a certain degree of local autonomy, but the country as a whole was controlled by strict British authority. Soon after his return to India, Gandhi dedicated himself to the goal of Indian independence. From 1920 to 1922, he led a "non-cooperation movement", in which he called upon Indians to stop cooperating with the British, to become self-reliant, and to withdraw from British organizations. In 1922, the British authorities imprisoned Gandhi on charges of sedition (that is, inciting rebellion). In 1925, Gandhi was released due to ill health, but over his lifetime he was to be imprisoned many times. Gandhi became a social reformer, working tirelessly to enhance Hindu-Muslim relations, as he slowly led his country toward independence. Over the years, he founded many newspapers, which he used to further his ideals. (A little-known fact is that Gandhi is one of the principal figures in the history of Indian journalism.) Gandhi developed satyagraha into a national movement, stressing passive resistance, nonviolent disobedience, boycotts and, on occasion, hunger strikes. He became so well known and respected that he gained influence with both the general public and the British rulers.
For example, in 1939, by a combination of fasting and satyagraha, Gandhi was able to compel several states that were ruled by princes to grant democratic reforms. Not only could he unify the many diverse elements of the Indian National Congress, but he was also able to force political concessions from the British by threatening to fast until death. After World War II, Gandhi was involved in the deliberations that led to India's independence. The same deliberations, however, also led to the partition of India into two countries: modern-day India (primarily for Hindus) and Pakistan (for Muslims). Gandhi strongly opposed this partition, which ultimately resulted in the death of about 1 million people and the dislocation of over 11 million people. On January 30, 1948, just after India attained its independence from Britain, Gandhi was assassinated at the age of 78. The killer was a Hindu fanatic working as part of a conspiracy that blamed Gandhi for the partition of the country. Although Gandhi was a man of faith, he did not found a church, nor did he create any specific dogma for his followers. Gandhi believed in the unity of all mankind under one god, and preached Hindu, Muslim and Christian ethics. As a youth, he was neither a genius nor a child prodigy. Indeed, he suffered from extreme shyness. However, he approached life as a very long series of small steps towards his goals, which he pursued relentlessly. By the time he died, India had become an independent country, free of British rule and, in fact, the largest democracy in the world, mostly Hindu with a sizable Muslim minority. Today, Gandhi is remembered not only as a political leader, but as a moralist who appealed to the universal conscience of mankind. As such, he changed the world.
http://www.harley.com/people/mohandas-gandhi.html
4
Please add any and all lesson plans here.

Lesson Plans from Our Previous Wiki
- Develop, Use, and Revise Search Strategies
- Identify Potential Sources

Great Lessons on Evaluating Sources from Our Previous Wiki
- Evaluating Web Sites Assignment via Pensacola Catholic High School
- Website Evaluation Form
- The Ten Finger Method to Evaluate Websites
- MANY activities, lessons, and tips for evaluating internet resources - includes misleading sites that can be used as a classroom activity. Give some students reliable sources and give others unreliable/false ones. Have them do research and present to the class their findings on their source. Do not tell them that some are not reliable. This highlights the further need for the evaluation of internet sources.
- Lessons and Resources for Evaluating Website Content - includes additional hoax sites
- Mrs. V's Lessons
- Adventures of Cyber Bee
- Grades 9-12 Evaluation Activities
- Grades 6-8 Identifying Good Sites
- Grades 4-5 Rating Websites
- Grades 2-3 Finding Good Sites
- Grades K-1 What's a Good Site?
- Kathy Schrock's Guide to Website Evaluation - Very good, including step-by-step instructions.
- Evaluating Internet Research Sources - Practical ways to look at evaluating resources.
- Evaluation Wizard for Websites

Using Information Responsibly: Lessons for Teaching Students How to Use Information Responsibly
- Plagiarism Lessons for Grades 9-11
- The Purdue Online Writing Lab
- Cyber Bee - Citing Electronic Resources
- Resource Citation Including Email, ListServs, and Instant Messenger
- MANY Citation Resources Including Elementary, Middle, and High School Students
http://trails-informationliteracy.wikispaces.com/Lesson+Plans?responseToken=0fcb3e5cb2a98a3d1fd3d0663b4fc3b27
4.15625
The Roman Constitution was an uncodified set of guidelines and principles passed down mainly through precedent. The Roman constitution was not formal or even official; it was largely unwritten and constantly evolving. Having those characteristics, it was therefore more like the British common law system than a statutory law system like the written United States Constitution, even though the constitution's evolution through the years was often directed by the passage of new laws and the repeal of older ones. Concepts that originated in the Roman constitution live on in both forms of government to this day. Examples include checks and balances, the separation of powers, vetoes, filibusters, quorum requirements, term limits, impeachments, the powers of the purse, and regularly scheduled elections. Even some lesser used modern constitutional concepts, such as the bloc voting found in the electoral college of the United States, originate from ideas found in the Roman constitution.

Over the years, the Roman constitution continuously evolved. By 510 BC, the Constitution of the Roman Kingdom had given way to the Constitution of the Roman Republic. By 27 BC, the Constitution of the Roman Republic had given way to the Constitution of the Roman Empire. By 300 AD, the Constitution of the Roman Empire had given way to the Constitution of the Late Roman Empire. The actual changes, however, were quite gradual. Together, these four constitutions formed four epochs in the continuous evolution of one master constitution.

The Roman senate was the most permanent of all of Rome's political institutions. It was probably founded before the first king of Rome ascended the throne. It survived the fall of the Roman Kingdom in 510 BC, the fall of the Roman Republic in 27 BC, and the fall of the Roman Empire in 476 AD. It was, in contrast to many modern institutions named 'Senate', not a legislative body. The power of the senate waxed and waned throughout its history. During the days of the kingdom, it was little more than an advisory council to the king. The last king of Rome, the tyrant Lucius Tarquinius Superbus, was overthrown following a coup d'état that was planned in the senate. During the early republic, the senate was politically weak. During these early years, the executive magistrates were quite powerful. The transition from monarchy to constitutional rule was probably more gradual than the legends suggest. Thus, it took a prolonged weakening of these executive magistrates before the senate was able to assert its authority over those magistrates. By the middle republic, the senate reached the apex of its republican power. This occurred because of the convergence of two factors. The plebeians had recently achieved full political enfranchisement. Therefore, they were not as aggressive as they had been during the early republic in pushing for radical reforms. In addition, the period was marked by prolonged warfare against foreign enemies. The result was that both the popular assemblies and the executive magistrates deferred to the collective wisdom of the senate. The late republic saw a decline in the senate's power. This decline began following the reforms of the radical tribunes Tiberius and Gaius Gracchus. The declining influence of the senate during this era was caused, in large part, by the class struggles that had dominated the early republic.
The end result was the overthrow of the republic, and the creation of the Roman Empire. The senate of the very early Roman Empire was as weak as it had been during the late republic. However, after the transition from republic to empire was complete, the senate arguably held more power than it had held at any previous point. All constitutional powers (legislative, executive and judicial) had been transferred to the senate. However, unlike the senate of the republic, the senate of the empire was dominated by the emperor. It was through the senate that the emperor exercised his autocratic powers. By the late principate, the senate's power had declined into near-irrelevance. It never again regained the power that it had held before that point. Much of the surviving literature from the imperial period was written by senators. To a large degree, this demonstrates the strong cultural influence of the senate, even during the late empire. The institution survived the fall of the Empire in the West, and even enjoyed a modest revival as imperial power was reduced to a government of Italy only. The senatorial class was severely affected by the Gothic wars. The first Roman assembly, the Comitia Curiata, was founded during the early kingdom. Its only political role was to elect new kings. Sometimes, the king would submit his decrees to it for ratification. During the early republic, the Comitia Curiata was the only legislative assembly with any power. Shortly after the founding of the republic, however, the Comitia Centuriata and the Comitia Tributa became the predominant legislative assemblies. Most modern legislative assemblies are bodies consisting of elected representatives. Their members typically propose and debate bills. These modern assemblies use a form of representative democracy. In contrast, the assemblies of the Roman Republic used a form of direct democracy. The Roman assemblies were bodies of ordinary citizens, rather than elected representatives. In this regard, bills voted on (called plebiscites) were similar to modern popular referenda. Unlike many modern assemblies, Roman assemblies were not bicameral. That is to say that bills did not have to pass both major assemblies in order to be enacted into law. In addition, no other branch had to ratify a bill (rogatio) in order for it to become law (lex). Members also had no authority to introduce bills for consideration; only executive magistrates could introduce new bills. This arrangement is also similar to what is found in many modern countries. Usually, ordinary citizens cannot propose new laws for their enactment by a popular election. Unlike many modern assemblies, the Roman assemblies also had judicial functions. After the founding of the empire, the powers of the assemblies were transferred to the senate. When the senate elected magistrates, the results of those elections would be read to the assemblies. Occasionally, the emperor would submit laws to the Comitia Tributa for ratification. The assemblies ratified laws up until the reign of the emperor Domitian. After this point, the assemblies simply served as vehicles through which citizens would organize. During the years of the Roman Kingdom, the king (rex) was the only executive magistrate with any power. He was assisted by two quaestors, whom he appointed. He would often appoint other assistants for other tasks. When he died, an interrex would preside over the senate and assemblies, until a new king was elected. 
Under the Constitution of the Roman Republic, the "executive branch" was composed of both ordinary as well as extraordinary magistrates. Each ordinary magistrate would be elected by one of the two major Legislative Assemblies of the Roman Republic. The principal extraordinary magistrate, the dictator, would be appointed upon authorization by the Senate of the Roman Republic. Most magistrates were elected annually for a term of one year. The terms for all annual offices would begin on New Year's Day, and end on the last day of December. The two highest ranking ordinary magistrates, the consuls and praetors, held a type of authority called imperium (Latin for "command"). Imperium allowed a magistrate to command a military force. Consuls held a higher grade of imperium than praetors. Consuls and praetors, as well as censors and curule aediles, were regarded as "curule magistrates". They would sit on a curule chair, which was a symbol of state power. Consuls and praetors were attended by bodyguards called lictors. The lictors would carry fasces. The fasces, which consisted of a bundle of rods with an embedded axe, were symbols of the coercive power of the state. Quaestors were not curule magistrates, and had little real power. Plebeian tribunes were not officially "magistrates", since they were elected only by the plebeians. Since they were considered to be the embodiment of the People of Rome, their office and their person were considered sacrosanct. It was considered to be a capital offense to harm a tribune, to attempt to harm a tribune, or to attempt to obstruct a tribune in any way. All other powers of the tribunate derived from this sacrosanctity. The tribunes were assisted by plebeian aediles.

In an emergency, a dictator would be appointed. A newly appointed dictator would usually select a deputy, known as the "Magister Equitum" ("Master of the Horse"). Both the dictator and his master of the horse were extraordinary magistrates, and they both held imperium. In practice, the dictator functioned as a consul without any constitutional checks on his power. After 202 BC, the dictatorship fell into disuse. During emergencies, the senate would pass the senatus consultum ultimum ("ultimate decree of the senate"). This suspended civil government and declared (something analogous to) martial law. It would declare "videant consules ne res publica detrimenti capiat" ("let the consuls see to it that the state suffer no harm"). In effect, the consuls would be vested with dictatorial powers.

After the fall of the republic, the old magistracies (dictators, consuls, praetors, censors, aediles, quaestors and tribunes) were either outright abandoned, or simply lost all powers. The emperor became the master of the state. The founding of the empire was tantamount to a restoration of the old monarchy. The chief executive became the unchallenged power in the state, the senate became a powerless advisory council, and the assemblies became irrelevant.

The legacy of the Roman constitution

The Roman constitution was one of the few constitutions to exist before the 18th century. None of the others are as well known to us today. And none of the others governed such a vast empire for so long. Therefore, the Roman constitution was used as a template, often the only one, when the first constitutions of the modern era were being drafted. And because of this, many modern constitutions share a similar, even identical, superstructure (such as a separation of powers and checks and balances) with the Roman constitution.
https://en.wikipedia.org/wiki/Roman_Constitution
4.46875
Beacon Lesson Plan Library
Leon County Schools

After reading the novel FREAK THE MIGHTY, students will be able to describe and illustrate the setting of the novel, explain character development through production of a graphic organizer, and identify the elements of the plot. The student describes or illustrates the setting in a literary text. The student explains character development in a literary text. The student creates a graphic organizer that represents the complex elements of a plot in a literary text.

- FREAK THE MIGHTY (Rodman Philbrick, Scholastic, 2001)
- Legal size white paper
- One copy of novel for each student
- One copy of the rubric for each student (SEE ASSOCIATED FILES)

1. Order enough copies of FREAK THE MIGHTY for each student (books can be ordered from Amazon.com or Scholastic)
2. Copy One-Pager form onto transparency.
3. Set up overhead projector
4. Assemble necessary materials (paper, markers, crayons, rulers)
5. Copy assessment rubric for distribution to students (see associated file)

Students should have read the novel FREAK THE MIGHTY before initiating this lesson.

1. Review story elements through a class discussion. Discussion points include plot, setting, and character development. Discuss with the students examples of these elements based on the novel FREAK THE MIGHTY. Possible questions could be as follows: PLOT: What is plot? What are some examples of the plot from FREAK THE MIGHTY? What is the main conflict? How is the conflict resolved? SETTING: What is setting? Where does the story take place? How do we know? What are the clues in the novel that help us determine the setting of the story? If you had to illustrate the setting, what do you think it would look like? CHARACTER DEVELOPMENT: Who are the main characters? What do we know about our main characters? How do we know? What are the experiences they go through? What do they look like? How do the different characters deal with the conflicts in the novel?
2. Address any questions or concerns that the students may have. Discuss the following: Re-define the term conflict. Re-define the term resolution. How would the novel be different if the characters' personalities were swapped? What do you think would happen next if you could write the next chapter?
3. Through a class discussion, review the ideas of a road map, flow chart, and sequencing chart. Students work individually to develop graphic organizers to help them map out the plot of the story. Students work on scrap paper to develop ideas. Let students be as creative as possible. Periodically point out exemplary examples/ideas of graphic organizers as you observe their work.
4. Allow students to share their ideas with their classmates in either pairs or groups of four to five. Groups report back to class.
5. Review the elements discussed today. Tell the students tomorrow they will be working on demonstrating their knowledge of these elements through designing their own One Pagers (show example on overhead projector).

Days 2 & 3
1. Review the elements of plot, setting, and character development. Address any questions that the students may have.
2. Introduce the One Pager idea to the students (transparency / see associated file).
3. Pass out the scoring rubric to students (see associated file).
4. Instruct students on what is expected of them. They are to complete the One Pager using the transparency and the rubric as a guide.
5. Pass out legal size paper and supplies to students.
6. Instruct students that they will have two days to complete this activity.
When students finish this activity, have them turn in their One Pagers with their scoring rubric (see associated file). 8. Score students using the rubric and return the One Pagers to students. Assessment of students' One Pagers will be based on the rubric attached in Associated Files. Teachers' guide to FREAK THE MIGHTY, including discussion questions. This site also contains links to the author and to the movie “The Mighty” produced by Miramax, 1999.
http://www.beaconlearningcenter.com/lessons/lesson.asp?ID=2238
4.0625
A family of particular transformations may be continuous (such as rotation of a circle) or discrete (e.g., reflection of a bilaterally symmetric figure, or rotation of a regular polygon). Continuous and discrete transformations give rise to corresponding types of symmetries. Continuous symmetries can be described by Lie groups while discrete symmetries are described by finite groups (see Symmetry group). These two concepts, Lie and finite groups, are the foundation for the fundamental theories of modern physics. Symmetries are frequently amenable to mathematical formulations such as group representations and can, in addition, be exploited to simplify many problems. Arguably the most important example of a symmetry in physics is that the speed of light has the same value in all frames of reference, an invariance described mathematically by the Poincaré group, the symmetry group of special relativity. Another important example is the invariance of the form of physical laws under arbitrary differentiable coordinate transformations, which is an important idea in general relativity. Symmetry as invariance Invariance is specified mathematically by transformations that leave some quantity unchanged. This idea can apply to basic real-world observations. For example, temperature may be constant throughout a room. Since the temperature is independent of position within the room, the temperature is invariant under a shift in the measurer's position. Similarly, a uniform sphere rotated about its center will appear exactly as it did before the rotation. The sphere is said to exhibit spherical symmetry. A rotation about any axis of the sphere will preserve how the sphere "looks". Invariance in force The above ideas lead to the useful idea of invariance when discussing observed physical symmetry; this can be applied to symmetries in forces as well. For example, an electric field due to a wire is said to exhibit cylindrical symmetry, because the electric field strength at a given distance r from the electrically charged wire of infinite length will have the same magnitude at each point on the surface of a cylinder (whose axis is the wire) with radius r. Rotating the wire about its own axis does not change its position or charge density, hence it will preserve the field. The field strength at a rotated position is the same. Suppose some configuration of charges (not necessarily stationary) produces an electric field in some direction; then rotating the configuration of the charges (without disturbing the internal dynamics that produces the particular field) will lead to a net rotation of the direction of the electric field. These two properties are interconnected through the more general property that rotating any system of charges causes a corresponding rotation of the electric field. In Newton's theory of mechanics, given two bodies, each with mass m, starting from rest at the origin and moving along the x-axis in opposite directions, one with speed v₁ and the other with speed v₂, the total kinetic energy of the system (as calculated from an observer at the origin) is ½m(v₁² + v₂²) and remains the same if the velocities are interchanged. The total kinetic energy is preserved under a reflection in the y-axis.
The last example above illustrates another way of expressing symmetries, namely through the equations that describe some aspect of the physical system. The above example shows that the total kinetic energy will be the same if v₁ and v₂ are interchanged. Local and global symmetries Symmetries may be broadly classified as global or local. A global symmetry is one that holds at all points of spacetime, whereas a local symmetry is one that has a different symmetry transformation at different points of spacetime; specifically, a local symmetry transformation is parameterised by the spacetime co-ordinates. Local symmetries play an important role in physics as they form the basis for gauge theories. Continuous symmetries The two examples of rotational symmetry described above - spherical and cylindrical - are each instances of continuous symmetry. These are characterised by invariance following a continuous change in the geometry of the system. For example, the wire may be rotated through any angle about its axis and the field strength will be the same on a given cylinder. Mathematically, continuous symmetries are described by continuous or smooth functions. An important subclass of continuous symmetries in physics are spacetime symmetries. Continuous spacetime symmetries are symmetries involving transformations of space and time. These may be further classified as spatial symmetries, involving only the spatial geometry associated with a physical system; temporal symmetries, involving only changes in time; or spatio-temporal symmetries, involving changes in both space and time. - Time translation: A physical system may have the same features over a certain interval of time; this is expressed mathematically as invariance under the transformation t → t + a for any real numbers t and a in the interval. For example, in classical mechanics, a particle acted upon only by gravity has a gravitational potential energy that depends only on its height above the Earth's surface. Assuming no change in the height of the particle, this will be its total gravitational potential energy at all times. In other words, by considering the state of the particle at some time and again at a later time within the interval, the particle's total gravitational potential energy will be preserved. - Spatial translation: These spatial symmetries are represented by transformations of the form r → r + a and describe those situations where a property of the system does not change with a continuous change in location. For example, the temperature in a room may be independent of where the thermometer is located in the room. - Spatial rotation: These spatial symmetries are classified as proper rotations and improper rotations. The former are just the 'ordinary' rotations; mathematically, they are represented by square matrices with unit determinant. The latter are represented by square matrices with determinant −1 and consist of a proper rotation combined with a spatial reflection (inversion). For example, a sphere has proper rotational symmetry. Other types of spatial rotations are described in the article Rotation symmetry. - Poincaré transformations: These are spatio-temporal symmetries which preserve distances in Minkowski spacetime, i.e. they are isometries of Minkowski space. They are studied primarily in special relativity. Those isometries that leave the origin fixed are called Lorentz transformations and give rise to the symmetry known as Lorentz covariance.
- Projective symmetries: These are spatio-temporal symmetries which preserve the geodesic structure of spacetime. They may be defined on any smooth manifold, but find many applications in the study of exact solutions in general relativity. - Inversion transformations: These are spatio-temporal symmetries which generalise Poincaré transformations to include other conformal one-to-one transformations on the space-time coordinates. Lengths are not invariant under inversion transformations but there is a cross-ratio on four points that is invariant. Mathematically, spacetime symmetries are usually described by smooth vector fields on a smooth manifold. The underlying local diffeomorphisms associated with the vector fields correspond more directly to the physical symmetries, but the vector fields themselves are more often used when classifying the symmetries of the physical system. Some of the most important vector fields are Killing vector fields, which are those spacetime symmetries that preserve the underlying metric structure of a manifold. In rough terms, Killing vector fields preserve the distance between any two points of the manifold and often go by the name of isometries. Discrete symmetries A discrete symmetry is a symmetry that describes non-continuous changes in a system. For example, a square possesses discrete rotational symmetry, as only rotations by multiples of right angles will preserve the square's original appearance. Discrete symmetries sometimes involve some type of 'swapping', these swaps usually being called reflections or interchanges. - Time reversal: Many laws of physics describe real phenomena when the direction of time is reversed. Mathematically, this is represented by the transformation t → −t. For example, Newton's second law of motion still holds if, in the equation F = m d²x/dt², the time t is replaced by −t. This may be illustrated by recording the motion of an object thrown up vertically (neglecting air resistance) and then playing it back. The object will follow the same parabolic trajectory through the air, whether the recording is played normally or in reverse. Thus, position is symmetric with respect to the instant that the object is at its maximum height. - Spatial inversion: These are represented by transformations of the form r → −r and indicate an invariance property of a system when the coordinates are 'inverted'. Said another way, these are symmetries between a certain object and its mirror image. - Glide reflection: These are represented by a composition of a translation and a reflection. These symmetries occur in some crystals and in some planar symmetries, known as wallpaper symmetries. C, P, and T symmetries The Standard Model of particle physics has three related natural near-symmetries. These state that the universe we live in should be indistinguishable from one in which: - Every particle is replaced with its antiparticle. This is C-symmetry (charge symmetry); - Everything appears as if reflected in a mirror. This is P-symmetry (parity symmetry); - The direction of time is reversed. This is T-symmetry (time reversal symmetry). T-symmetry is counterintuitive (surely the future and the past are not symmetrical) but explained by the fact that the Standard Model describes local properties, not global ones like entropy. To properly reverse the direction of time, one would have to put the Big Bang and the resulting low-entropy state in the "future." Since we perceive the "past" ("future") as having lower (higher) entropy than the present (see perception of time), the inhabitants of this hypothetical time-reversed universe would perceive the future in the same way as we perceive the past. These symmetries are near-symmetries because each is broken in the present-day universe.
However, the Standard Model predicts that the combination of the three (that is, the simultaneous application of all three transformations) must be a symmetry, called CPT symmetry. In the 4-dimensional matrix description, the action of P and T is through a diagonal matrix, the negative identity, as well as C; hence CPT is the identity operator. CP violation, the violation of the combination of C- and P-symmetry, is necessary for the presence of significant amounts of baryonic matter in the universe. CP violation is a fruitful area of current research in particle physics. Supersymmetry A type of symmetry known as supersymmetry has been used to try to make theoretical advances in the Standard Model. Supersymmetry is based on the idea that there is another physical symmetry beyond those already developed in the Standard Model, specifically a symmetry between bosons and fermions. Supersymmetry asserts that each type of boson has, as a supersymmetric partner, a fermion, called a superpartner, and vice versa. Supersymmetry has not yet been experimentally verified: no known particle has the correct properties to be a superpartner of any other known particle. If superpartners exist they must have masses greater than current particle accelerators can generate. Mathematics of physical symmetry Continuous symmetries are specified mathematically by continuous groups (called Lie groups). Many physical symmetries are isometries and are specified by symmetry groups. Sometimes this term is used for more general types of symmetries. The set of all proper rotations (about any angle) through any axis of a sphere form a Lie group called the special orthogonal group SO(3). (The 3 refers to the three-dimensional space of an ordinary sphere.) Thus, the symmetry group of the sphere with proper rotations is SO(3). Any rotation preserves distances on the surface of the ball. The set of all Lorentz transformations form a group called the Lorentz group (this may be generalised to the Poincaré group). Discrete symmetries are described by discrete groups. For example, the symmetries of an equilateral triangle are described by the symmetric group S₃. An important type of physical theory based on local symmetries is called a gauge theory and the symmetries natural to such a theory are called gauge symmetries. Gauge symmetries in the Standard Model, used to describe three of the fundamental interactions, are based on the SU(3) × SU(2) × U(1) group. (Roughly speaking, the symmetries of the SU(3) group describe the strong force, the SU(2) group describes the weak interaction and the U(1) group describes the electromagnetic force.) Also, the reduction by symmetry of the energy functional under the action by a group and spontaneous symmetry breaking of transformations of symmetric groups appear to elucidate topics in particle physics (for example, the unification of electromagnetism and the weak force in physical cosmology). Conservation laws and symmetry The symmetry properties of a physical system are intimately related to the conservation laws characterizing that system. Noether's theorem gives a precise description of this relation. The theorem states that each continuous symmetry of a physical system implies that some physical property of that system is conserved. Conversely, each conserved quantity has a corresponding symmetry. For example, the isometry of space gives rise to conservation of (linear) momentum, and isometry of time gives rise to conservation of energy.
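As a compact, textbook-style illustration of this relation (a sketch added here for concreteness, not drawn from this article's sources), consider time-translation symmetry in Lagrangian mechanics, written in LaTeX:

% If the Lagrangian L(q, qdot) has no explicit time dependence,
% the energy function E is conserved along solutions of the
% Euler–Lagrange equation d/dt (∂L/∂qdot) = ∂L/∂q.
\[
E \;=\; \dot q\,\frac{\partial L}{\partial \dot q} \;-\; L ,
\qquad
\frac{dE}{dt}
\;=\; \dot q\!\left(\frac{d}{dt}\frac{\partial L}{\partial \dot q}
      \;-\; \frac{\partial L}{\partial q}\right)
      \;-\; \frac{\partial L}{\partial t}
\;=\; 0 .
\]
% Example: L = (1/2) m qdot^2 - V(q) gives E = (1/2) m qdot^2 + V(q),
% the familiar kinetic-plus-potential energy.

Analogous arguments for spatial translations and rotations give conservation of linear and angular momentum, matching the first rows of the table that follows.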
The following table summarizes some fundamental symmetries and the associated conserved quantities.
|Class||Invariance||Conserved quantity|
|Continuous spacetime symmetry||translation in time||energy|
| ||translation in space||linear momentum|
| ||rotation in space||angular momentum|
|Discrete symmetry||P, coordinate inversion||spatial parity|
| ||C, charge conjugation||charge parity|
| ||T, time reversal||time parity|
| ||CPT||product of parities|
|Internal symmetry (independent of spacetime coordinates)||U(1) gauge transformation||electric charge|
| ||U(1) gauge transformation||lepton generation number|
| ||U(1) gauge transformation||hypercharge|
| ||U(1)Y gauge transformation||weak hypercharge|
| ||U(2) [ U(1) × SU(2) ]||electroweak force|
| ||SU(2) gauge transformation||isospin|
| ||SU(2)L gauge transformation||weak isospin|
| ||P × SU(2)||G-parity|
| ||SU(3) "winding number"||baryon number|
| ||SU(3) gauge transformation||quark color|
| ||SU(3) (approximate)||quark flavor|
| ||S(U(2) × U(3)) [ U(1) × SU(2) × SU(3) ]|| |
Mathematics Continuous symmetries in physics preserve transformations. One can specify a symmetry by showing how a very small transformation affects various particle fields. The commutator of two of these infinitesimal transformations is equivalent to a third infinitesimal transformation of the same kind; hence they form a Lie algebra. For a general field, such a transformation is generated by a vector field h(x). Without gravity only the Poincaré symmetries are preserved, which restricts h(x) to be of the form h^μ(x) = M^μ_ν x^ν + P^μ, where M is an antisymmetric matrix (giving the Lorentz and rotational symmetries) and P is a general vector (giving the translational symmetries). Other symmetries affect multiple fields simultaneously. For example, local gauge transformations apply to both a vector and a spinor field, with the fields mixed by the generators of a particular Lie group. So far the transformations on the right have only included fields of the same type. Supersymmetries are defined according to how they mix fields of different types. Another symmetry which is part of some theories of physics and not of others is scale invariance, which involves Weyl transformations (local rescalings of the fields). If the fields have this symmetry then it can be shown that the field theory is almost certainly conformally invariant also. This means that in the absence of gravity h(x) would be restricted to a form that, in addition to M and P, contains terms with D generating scale transformations and K generating special conformal transformations. For example, N=4 super-Yang-Mills theory has this symmetry while General Relativity doesn't, although other theories of gravity such as conformal gravity do. The 'action' of a field theory is an invariant under all the symmetries of the theory. Much of modern theoretical physics is concerned with speculating on the various symmetries the Universe may have and finding the invariants to construct field theories as models. In string theories, since a string can be decomposed into an infinite number of particle fields, the symmetries on the string world sheet are equivalent to special transformations which mix an infinite number of fields. See also - Conservation law - Conserved current - Covariance and contravariance - Fictitious force - Galilean invariance - Gauge theory - General covariance - Harmonic coordinate condition - Inertial frame of reference - Lie group - List of mathematical topics in relativity - Lorentz covariance - Noether's theorem - Poincaré group - Special relativity - Spontaneous symmetry breaking - Standard model - Standard model (mathematical formulation) - Symmetry breaking - Wheeler–Feynman Time-Symmetric Theory References - G. Kalmbach H.E.: Quantum Mathematics: WIGRIS. RGN Publications, Delhi, 2014. - Leon Lederman and Christopher T. Hill (2005) Symmetry and the Beautiful Universe.
Amherst NY: Prometheus Books. - Schumm, Bruce (2004) Deep Down Things. Johns Hopkins Univ. Press. - Victor J. Stenger (2000) Timeless Reality: Symmetry, Simplicity, and Multiple Universes. Buffalo NY: Prometheus Books. Chpt. 12 is a gentle introduction to symmetry, invariance, and conservation laws. - Anthony Zee (2007) Fearful Symmetry: The search for beauty in modern physics, 2nd ed. Princeton University Press. ISBN 978-0-691-00946-9. 1986 1st ed. published by Macmillan. - Brading, K., and Castellani, E., eds. (2003) Symmetries in Physics: Philosophical Reflections. Cambridge Univ. Press. - -------- (2007) "Symmetries and Invariances in Classical Physics" in Butterfield, J., and John Earman, eds., Philosophy of Physic Part B. North Holland: 1331-68. - Debs, T. and Redhead, M. (2007) Objectivity, Invariance, and Convention: Symmetry in Physical Science. Harvard Univ. Press. - John Earman (2002) "Laws, Symmetry, and Symmetry Breaking: Invariance, Conservations Principles, and Objectivity." Address to the 2002 meeting of the Philosophy of Science Association. - G. Kalmbach H.E.: Quantum Mathematics: WIGRIS. RGN Publications, Delhi, 2014 - Mainzer, K. (1996) Symmetries of nature. Berlin: De Gruyter. - Mouchet, A. "Reflections on the four facets of symmetry: how physics exemplifies rational thinking". European Physical Journal H 38 (2013) 661 hal.archives-ouvertes.fr:hal-00637572 - Thompson, William J. (1994) Angular Momentum: An Illustrated Guide to Rotational Symmetries for Physical Systems. Wiley. ISBN 0-471-55264-X. - Bas Van Fraassen (1989) Laws and symmetry. Oxford Univ. Press. - Eugene Wigner (1967) Symmetries and Reflections. Indiana Univ. Press.
https://en.wikipedia.org/wiki/Symmetry_in_physics
4.125
Lesson 16: Writing to a text file In the previous lesson, we learned to read from a text file. In this lesson, we will learn to write to a text file. The two methods are very similar, but there is one very important difference: You must have write permissions to the file. This means that the file will have to be located in a folder where you have the necessary permissions. If you work locally on your own computer, you can set the permissions yourself: right-click on the folder and choose "Properties". With most web hosts, you will normally have one folder with write permissions. It's often called something like "cgi-bin", "log", "databases" or something similar. If your web host permits it, you might also be able to set permissions yourself. Usually you can simply right-click on a folder in your FTP client and choose "properties" or "permissions" or something similar; the screenshots below show how it's done in FileZilla. Read more on your web host's support pages. Note that it is the text file that needs to be in the folder with write permissions - not the PHP file. Open the text file for writing In the same way as when reading from a text file, the fopen function is used for writing, but this time we set the mode to "w" (writing) or "a" (appending). The difference between writing and appending is where the 'cursor' is located - either at the beginning or at the end of the text file. The examples in this lesson use an empty text file called textfile.txt. But you can also create your own text file if you like. First, let us try to open the text file for writing:

<?php
// Open the text file
$f = fopen("textfile.txt", "w");

// Close the text file
fclose($f);
?>

Example 1: Write a line to the text file To write a line, we must use the function fwrite, like this:

<html>
<head>
<title>Writing to a text file</title>
</head>
<body>

<?php
// Open the text file
$f = fopen("textfile.txt", "w");

// Write text line
fwrite($f, "PHP is fun!");

// Close the text file
fclose($f);

// Open file for reading, and read the line
$f = fopen("textfile.txt", "r");
echo fgets($f);
fclose($f);
?>

</body>
</html>

Since we opened the file for writing, the line is added at the top, and thus overwrites the existing line. If we instead open the file for appending, the line is added at the bottom of the text file, which then will increase by one line each time it's written to. Example 2: Adding a text block to a text file Of course, it is also possible to add an entire text block, instead of just a single line, like this:

<html>
<head>
<title>Write to a text file</title>
</head>
<body>

<h1>Adding a text block to a text file:</h1>

<form action="myfile.php" method='post'>
<textarea name='textblock'></textarea>
<input type='submit' value='Add text'>
</form>

<?php
// Only write when the form has actually been submitted, so the file
// is not overwritten with an empty value when the page first loads
if (isset($_POST["textblock"])) {
    // Open the text file
    $f = fopen("textfile.txt", "w");

    // Write text
    fwrite($f, $_POST["textblock"]);

    // Close the text file
    fclose($f);
}

// Open file for reading, and read the line
$f = fopen("textfile.txt", "r");

// Read text
echo fgets($f);
fclose($f);
?>

</body>
</html>

In the next lessons, we look at another way of storing data: databases.
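The lesson mentions append mode ("a") but does not show it in use. Here is a minimal sketch of the same idea using appending instead of writing (it reuses the lesson's textfile.txt and only standard PHP file functions); each run adds one more line at the bottom of the file instead of overwriting it:

<?php
// Open the text file in append mode: the 'cursor' starts at the end of the file
$f = fopen("textfile.txt", "a");

// Add one line (with a newline) at the bottom of the file
fwrite($f, "PHP is fun!\n");
fclose($f);

// Read the whole file back, line by line
$f = fopen("textfile.txt", "r");
while (($line = fgets($f)) !== false) {
    echo $line . "<br>";
}
fclose($f);
?>

Reloading the page a few times shows the list growing by one line per run, which is the behaviour described in the text above.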
http://html.net/tutorials/php/lesson16.php
4.25
January 14, 2013 Study Reveals First Ever Images Of Early Tetrapod Backbone And How It Helped In Land Evolution Lawrence LeBlond for redOrbit.com - Your Universe Online. Using high-energy X-rays and a new data extraction protocol, an international consortium of scientists has for the first time rendered a 3D model of a prehistoric tetrapod backbone. The new reconstruction has shed new light on how the early animals moved once they made it onto land. One of the main creatures studied was the fierce-looking Ichthyostega, which lived 374–359 million years ago and was a transitional species between aquatic and terrestrial animals. The 3D model showed researchers that these new land dwellers moved much like modern seals do. The researchers believe Ichthyostega was more of a shallow water predator, navigating swamps and ponds in search of food, occasionally making landfall to perhaps feed. The researchers think the animal dragged itself across flat ground, using its front legs to push up and forward. Results of the new study have been published in this week's issue of the journal Nature. The international study was led by Stephanie E. Pierce from The Royal Veterinary College in London and Jennifer A. Clack from the University of Cambridge. Other members of the team hailed from Sweden and France. Tetrapods are four-limbed vertebrates. In our modern world, animals such as amphibians, reptiles, birds, and mammals are all tetrapods. Early tetrapods, such as Ichthyostega, made short excursions across shallow bodies of water and perhaps even shorter jaunts over land, using their underdeveloped limbs for primitive locomotion. Just how these early tetrapods transitioned from a life at sea to land-dwelling has been a hotly debated topic among paleontologists and evolutionary biologists for decades. Not only do all tetrapods have four limbs, they also have backbones (vertebral columns). Vertebrates also include fish, from which tetrapods evolved. The backbone is formed from vertebrae connected in a row, from head to tail. But unlike the backbone of modern tetrapods, in which each vertebra is composed of only one bone, early tetrapods had vertebrae made up of multiple parts. “For more than 100 years, early tetrapods were thought to have vertebrae composed of three sets of bones - one bone in front, one on top, and a pair behind,” said Pierce. “But, by peering inside the fossils using synchrotron X-rays we have discovered that this traditional view literally got it back-to-front.” “The results of this study force us to re-write the textbook on backbone evolution in the earliest limbed animals,” Discovery News quoted Pierce as saying. To make their analysis, the team relied on the European Synchrotron Radiation Facility (ESRF) in France to scan three fossil fragments of early tetrapods. Using the X-ray scanner, details began to emerge of the fossil bones buried deep inside the rock matrix. Although the rock obscured most of the X-rays, the team was able to decipher the readings using a detailed data extraction method. "Without the new method, it would not have been possible to reveal the elements of the spine in three dimensions with a resolution of 30 micrometres," noted study coauthor Sophie Sanchez from the University of Uppsala and the ESRF. Between the X-ray images and the data extraction tools, the team discovered that what they believed to be the first bone (the intercentrum) was actually the last in the series.
The team said this revelation brings new insight into the functional evolution of the tetrapod backbone. "By understanding how each of the bones fit together we can begin to explore the mobility of the spine and test how it may have transferred forces between the limbs during the early stages of land movement," noted Pierce. Aside from the backbone discovery, the team also found that Ichthyostega had an unusual assortment of previously unknown skeletal features, including a string of bones extending down the middle of its chest. “These chest bones turned out to be the earliest evolutionary attempt to produce a bony sternum. Such a structure would have strengthened the ribcage of Ichthyostega, permitting it to support its body weight on its chest while moving about on land,” Clack explained. In continuing their research, the team said the next phase will be to further investigate how the backbone aided in the locomotion of these early tetrapods. The image below shows an artist's impression of an Ichthyostega tetrapod, with the cut-out showing the 3-D reconstruction of two vertebrae from the study. Credit: Julia Molnar
http://www.redorbit.com/news/science/1112763058/tetrapod-backbone-early-land-evolution-011413/
4.09375
People often wonder how delicate arches and finely balanced pillars of stone stand up to the stress of holding up their own immense weight. Actually, new research suggests, it’s that stress that helps pack individual grains of sand together and slows erosion of the formations. In lab experiments, scientists dropped small blocks of loosely consolidated sandstone into water—and watched them completely fall apart as the water dissolved minerals holding the grains together. But when the scientists placed weights on top of the sandstone samples before submersing them, disintegration ceased once stress in the eroding column rose to a certain threshold that packed the sand grains into a strong, rocklike material, the researchers report online today in Nature Geoscience. In other tests, weight-induced stress similarly protected samples against complete erosion from simulated rainfall. At large scale in the real world, stress transmitted through arches and pillars to their bases (in landforms such as Delicate Arch in Utah’s Arches National Park, shown) slows down—but doesn’t stop—natural sculpting due to wind and water, the researchers say. Bits of the landform that don’t bear weight are among the first to wear away, which helps explain why arches are often unusually smooth. Cracks, fissures, and soft layers in rock formations influence the shapes these natural sculptures take as they evolve.
http://www.sciencemag.org/news/2014/07/what-keeps-stone-arches-falling-down
4.09375
Alphabet Letter S Impact Poster Preschool Lesson Plan Activities Letter S Poster Note: This activity counts as two activities due to the steps involved. Version for 2 & 3 year olds: *Step 1: Adult draws the upper case and lower case letter on the large piece of paper (ahead of time) and prints and cuts the four letter pictures. Proceed with the steps below. Version for 3 & 4 year olds: *Step 1: Adult draws large dots to form the letters (include arrows to show direction). Child will connect the dots with paint. Proceed with the steps below. *This activity is better accomplished when the poster is placed on a wall, poster board or easel. *Encourage child to paint the letter. Explain the proper direction of “writing” the letter. Join child in the painting process. *While the paint dries, discuss some facts about the letter pictures and conduct a short activity or craft for each image; ask child to find the hidden S's in the images, and emphasize the letter sound. *Child will glue one or two pictures a day until all eight letter cards are on the poster. *Display the poster at the child's eye level in a place he will see regularly (kitchen wall, child's room, family room) for at least a week. *Remember to take a picture of the child with his completed first letter project for his scrapbook. Materials: *Large piece of paper, or tape four standard-size sheets together to make a poster *Paint brush or sponge. Print and cut these letter pictures for the poster: Letter S Impact flash cards
http://first-school.ws/activities/alpha/s/impactsposter.htm
4
Super Plastic Both Attracts and Repels Water An odd new material could be a boon in dry regions with limited access to clean water. A new, practical method for making surfaces with patterns of areas that strongly attract and strongly repel water could lead to a highly efficient method for capturing clean water. This versatile material could also find uses in fabricating new types of devices for medical tests and chemical synthesis. Scientists have reported numerous applications of water-attracting (superhydrophilic) and water-repelling (superhydrophobic) surfaces, including fog-free eyeglasses and windshields, and self-cleaning cloth and glass. Now a group of researchers in MIT's materials science and engineering department has combined those opposing characteristics on a single surface, by using a simple and versatile fabrication process. Robert Cohen, Michael Rubner, and colleagues started by assembling a nano-structured film made of alternating layers of positively and negatively charged polymers and silica nanoparticles. The film's structure and a coating of waxy fluorinated silane cause water to bead on it, forming near-perfect spheres that easily roll off. To add the superhydrophilic regions (to which water droplets cling), the researchers applied a naturally hydrophilic polymer to selected areas. In dry regions of the world, without easy access to clean water, such a material could be used for collecting water. In this application, the hydrophilic areas of the material would attract moisture in the air, collecting water drops that accumulate until they spill over into the hydrophobic regions and roll into a collecting channel. Currently, in countries with limited access to clean water, the inhabitants typically use large polypropylene fiber meshes to harvest water from fog. The new technology “would provide a more than tenfold increase in water capture compared to the inefficient nets that are used currently,” says Andrew Parker, a biologist at Oxford University and the Natural History Museum in London, who has studied the desert beetle that inspired the MIT work. If the new material “could be added simply to the roofs of houses in areas subjected to desert fogs,” says Parker, “then a water supply could be gained with little effort.” Rubner's lab is also taking the technique further. “When we harvest water, we have chemistry built into the hydrophilic area so that it has an antibacterial agent to kill off bacteria and other things that cause harm,” Rubner says. This decontaminates the water as it accumulates so that the collected water is safe for use. Applying this technique, the researchers have been able to kill common harmful bacteria in four minutes, he says. The coating could also find uses in biomedical applications to make microfluidic chips. Typically, microfluidic devices contain enclosed micrometer-wide channels etched into silicon, glass, or plastic plates. Then pressure or electric fields drive tiny volumes of fluids, typically nanoliters, along these channels for diagnostic tests and genetics research. For instance, to test for the presence of a certain protein in blood you could take blood in one channel and direct it to another channel containing a chemical reagent that identifies the protein. Compared with conventional microfluidics, a microfluidic chip based on the new surface would have the advantage of easier mixing, Rubner says.
Right now, the chips need pumps and valves that move the liquid around to induce mixing. “In our case you can mix the liquids by just controlling the amount of liquid you put on the surface,” he says. With a pipette, you could add precise amounts of fluid into two hydrophilic grooves placed close to each other. As you add more fluid, the droplets bulge out at the edge of the grooves because of the surrounding hydrophobic area. Eventually, the bulging surfaces touch and mix. Being able to confine liquids to a small region could provide densely packed reaction sites with more control over the reaction, he says, since adjacent drops won’t mix unless they are forced to. While the exact uses of this new material are still uncertain, it opens up many possibilities, says Kenneth Wynne, a chemical engineering professor at Virginia Commonwealth University. “Patterning ultra-hydrophilic patches on a ultra-hydrophobic surface in this way is new and useful,” he says.
https://www.technologyreview.com/s/405883/super-plastic-both-attracts-and-repels-water/
4.03125
Pharisees (fârˈĭsēz), one of the two great Jewish religious and political parties of the second commonwealth. Their opponents were the Sadducees, and it appears that the Sadducees gave them their name, perushim, Hebrew for "separatists" or "deviants." The Pharisees began their activities during or after the Hasmonean revolt (c.166–142 B.C.). The Pharisees upheld an interpretation of Judaism that was in opposition to the priestly Temple cult. They stressed faith in the one God; the divine revelation of the law both written and oral handed down by Moses through Joshua, the elders, and the prophets to the Pharisees; and eternal life and resurrection for those who keep the law. Pharisees insisted on the strict observance of Jewish law, which they began to codify. While in agreement on the broad outlines of Jewish law, the Pharisees encouraged debate on its fine points, and according to one view, practiced the tradition of zuggot, or pairs of scholars with opposing views. They developed the synagogue as an alternative place of worship to the Temple, with a liturgy consisting of biblical and prophetic readings, and the repetition of the shma, the basic creed of Judaism. In addition, they supported the separation of the worldly and the spiritual spheres, ceding the former to the secular rulers. Though some supported the revolt against Rome in A.D. 70, most did not. One Pharisee was Yohanan ben Zakkai, who fled to Jamnia, where he was instrumental in developing post-Temple Judaism. By separating Judaism from dependence on the Temple cult, and by stressing the direct relation between the individual and God, the Pharisees laid the groundwork for normative rabbinic Judaism. Their influence on Christianity was substantial as well, despite the passages in the New Testament which label the Pharisees "hypocrites" or "offspring of the vipers." St. Paul was originally a Pharisee. After the fall of the Temple (A.D. 70), the Pharisees became the dominant party until c.135. See L. Finkelstein, The Pharisees: The Sociological Background of Their Faith (3d ed., 2 vol., 1963); A. Finkel, The Pharisees and the Teacher of Nazareth (1964); L. Baeck, Pharisees (1947, repr. 1966); J. Neusner, From Politics to Piety (1973) and The Pharisees (1985).
http://www.factmonster.com/encyclopedia/society/pharisees.html
4.28125
Voltage drop describes how the supplied energy of a voltage source is reduced as electric current moves through the passive elements (elements that do not supply voltage) of an electrical circuit. Voltage drops across internal resistances of the source, across conductors, across contacts, and across connectors are undesired; supplied energy is lost (dissipated). Voltage drops across loads and across other active circuit elements are desired; supplied energy performs useful work. For example, an electric space heater may have a resistance of ten ohms, and the wires which supply it may have a resistance of 0.2 ohms, about 2% of the total circuit resistance. This means that approximately 2% of the supplied voltage is lost in the wire itself. Excessive voltage drop may result in unsatisfactory operation of, and damage to, electrical and electronic equipment. National and local electrical codes may set guidelines for the maximum voltage drop allowed in electrical wiring, to ensure efficiency of distribution and proper operation of electrical equipment. The maximum permitted voltage drop varies from one country to another. In electronic design and power transmission, various techniques are employed to compensate for the effect of voltage drop on long circuits or where voltage levels must be accurately maintained. The simplest way to reduce voltage drop is to increase the diameter of the conductor between the source and the load, which lowers the overall resistance. In power distribution systems, a given amount of power can be transmitted with less voltage drop if a higher voltage is used. More sophisticated techniques use active elements to compensate for the undesired voltage drop. Voltage drop in direct-current circuits: resistance Consider a direct-current circuit with a nine-volt DC source; three resistors of 67 ohms, 100 ohms, and 470 ohms; and a light bulb—all connected in series. The DC source, the conductors (wires), the resistors, and the light bulb (the load) all have resistance; all use and dissipate supplied energy to some degree. Their physical characteristics determine how much energy. For example, the DC resistance of a conductor depends upon the conductor's length, cross-sectional area, type of material, and temperature. If the voltage between the DC source and the first resistor (67 ohms) is measured, the voltage potential at the first resistor will be slightly less than nine volts. The current passes through the conductor (wire) from the DC source to the first resistor; as this occurs, some of the supplied energy is "lost" (unavailable to the load), due to the resistance of the conductor. Voltage drop exists in both the supply and return wires of a circuit. If the voltage across each resistor is measured, the measurement will be a significant number; it represents the energy used by that resistor. The larger the resistor, the more energy used by that resistor, and the bigger the voltage drop across that resistor. Ohm's Law can be used to verify voltage drop: in a DC circuit, voltage equals current multiplied by resistance, V = I × R. Also, Kirchhoff's circuit laws state that in any DC circuit, the sum of the voltage drops across each component of the circuit is equal to the supply voltage. Voltage drop in alternating-current circuits: impedance In alternating-current circuits, opposition to current flow does occur because of resistance (just as in direct-current circuits). Alternating current circuits also present a second kind of opposition to current flow: reactance.
This "total" opposition (resistance "plus" reactance) is called impedance. The impedance in an alternating-current circuit depends on the spacing and dimensions of the elements and conductors, the frequency of the alternating current, and the magnetic permeability of the elements, the conductors, and their surroundings. The voltage drop in an AC circuit is the product of the current and the impedance (Z) of the circuit. Electrical impedance, like resistance, is expressed in ohms. Electrical impedance is the vector sum of electrical resistance, capacitive reactance, and inductive reactance. It is expressed by the formula V = I × Z, analogous to Ohm's law for direct-current circuits. - Utility brownout - Voltage divider - Electrical distribution - Electrical resistance - Kirchhoff's voltage law - Electrical conduction - Ground loop (electricity) - Power cable - Mesh analysis - Electrical Principles for the Electrical Trades (Jim Jennesson) 5th edition
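To make the space-heater figures quoted above concrete, here is a short worked calculation (a sketch only; the 230 V supply value is an assumed figure, since the article does not state a supply voltage):

\[
I = \frac{V_{\text{supply}}}{R_{\text{wire}} + R_{\text{heater}}}
  = \frac{230\ \text{V}}{0.2\ \Omega + 10\ \Omega}
  \approx 22.5\ \text{A}
\]
\[
V_{\text{drop}} = I \, R_{\text{wire}} \approx 22.5\ \text{A} \times 0.2\ \Omega \approx 4.5\ \text{V},
\qquad
\frac{V_{\text{drop}}}{V_{\text{supply}}} = \frac{R_{\text{wire}}}{R_{\text{wire}} + R_{\text{heater}}}
  = \frac{0.2}{10.2} \approx 2\%
\]

The ratio on the right does not depend on the supply voltage, which is why the loss can be quoted as "about 2%" without specifying the source.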
https://en.wikipedia.org/wiki/Voltage_drop
4.03125
This lesson will look at the partisan political issues which emerged in the election of 1864 around Abraham Lincoln's role as a wartime president. Through an examination of primary documents, students will focus on Lincoln's suspension of habeas corpus, the Emancipation Proclamation, his decision to arm the freed slaves, his refusal to accept a compromise peace with the South, and the election of 1864. Popular sovereignty allowed the settlers of a federal territory to decide the slavery question without interference from Congress. This lesson plan will examine how the Kansas–Nebraska Act of 1854 affected the political balance between free and slave states and explore how its author, Stephen Douglas, promoted its policy of popular sovereignty in an effort to avoid a national crisis over slavery in the federal territories. In reviewing events, documentary evidence, and biographical information, students come to understand the complex nature of political decision-making in the United States. In this lesson, they consider the momentous questions facing the country during the Reconstruction debate by weighing the many factors that went into the solutions offered. Students also think critically as they consider whether and how other solutions might have played out. In this lesson, students examine the development of new constitutions in the reconstructed South. They also consider the political and social realities created by a dramatically changed electorate. In gaining a firmer grasp of the causes for the shifting alliances of this time, students see how far-reaching the consequences of the Civil War and Reconstruction era were and how much these events continue to shape our collective destiny today. In this lesson students will learn about Abraham Lincoln the individual and the President. By examining Alexander Gardner's February 5, 1865 photograph and reading a short biography of Lincoln, students will consider who the man on the other side of the lens was. Students will demonstrate their understanding by writing an "I Am" Poem and creating their own multimedia portrait of Lincoln. The focus of this lesson is the Robert Gould Shaw and the Massachusetts 54th Regiment Memorial by Augustus Saint-Gaudens. Students will put themselves in the shoes of the men of the Massachusetts 54th Regiment as they read, write, pose, and then create a comic strip about these American heroes. The newly re-elected Abraham Lincoln sought to unite the American people by interpreting the waning conflict as a divine judgment upon both sides of the war. This lesson will examine Lincoln's Second Inaugural Address to determine how he sought to reunite a divided country through a providential interpretation of the Civil War. This lesson will examine the economic, military and diplomatic strengths and weaknesses of the North and South on the eve of the Civil War. In making these comparisons students will use maps and read original documents to decide which side, if any, had an overall advantage at the start of the war. Abraham Lincoln felt that the attempt of seven states to leave the American union peacefully was, in fact, a total violation of law and order. This lesson will examine Lincoln's First Inaugural Address to understand why he thought his duty as president required him to treat secession as an act of rebellion and not a legitimate legal or constitutional action by disgruntled states. This unit explores the political thought of Abraham Lincoln on the subject of American union. 
For him, the union was not just a structure to govern the national interests of American states; it also represented a consensus about the future of freedom in America—a future where slavery would eventually be eliminated and liberty protected as the birthright of every human being. Students will examine Lincoln's three most famous speeches—the Gettysburg Address and the First and Second Inaugural Addresses—in addition to a little known fragment on the Constitution, union, and liberty to see what they say regarding the significance of union to the prospects for American self-government. Although Lincoln did not attend high school or college, he possessed a logical and inquisitive mind that found clarity in working out legal and political problems on paper. One fragment he wrote after the 1860 presidential election addressed how the Constitution and union were informed by the ideals of the Declaration of Independence. Lincoln wrote that while America's prosperity was dependent upon the union of the states, "the primary cause" was the principle of "Liberty to all." He believed this central ideal of free government embraced all human beings, and concluded that the American revolution would not have succeeded if its goal was "a mere change of masters." For Lincoln, union meant a particular kind of government of the states, one whose equality principle "clears the path for all—gives hope to all—and, by consequence, enterprize, and industry to all." As president of the United States, Lincoln used his First and Second Inaugural Addresses to explore the meaning of the American union in the face of a divided country. Upon assuming the presidency for the first time, he spoke at length about the nature of union, why secession was antithetical to self-government, and how the federal constitution imposed a duty upon him to defend the union of the states from rebellious citizens. When he was reelected four years later, and as the Civil War drew to a close, Lincoln transcended both Northern triumphalism and Southern defiance by offering a providential reading of the war and emancipation in hopes of reuniting the country. In his most famous speech, delivered upon the dedication of a national cemetery at the battlefield in Gettysburg, Pennsylvania, Lincoln gave a brief but profound meditation on the meaning of the Civil War and American union. With the Emancipation Proclamation as a new and pivotal development of the federal war effort, Lincoln sought to explain why the war to preserve the Union had to become a war to secure the freedom of former slaves. The nation would need to experience "a new birth of freedom" so that "government of the people, by the people, for the people, shall not perish from the earth." Upon completing this unit, students should have a better understanding of why Lincoln revered the union of the American states as "the last best, hope of earth." If your students lack experience in dealing with primary sources, you might use one or more preliminary exercises to help them develop these skills. The Learning Page at the American Memory Project of the Library of Congress includes a set of such activities. Another useful resource is the Digital Classroom of the National Archives, which features a set of Document Analysis Worksheets. Each lesson in this unit is designed to stand alone; taken together they present a robust portrait of how Lincoln viewed the American union. 
If there is not sufficient time to use all four lessons in the unit, either the first or the third lesson conveys Lincoln's understanding of the American union as a means to securing "Liberty to all"—with the first lesson focusing on the principled connection between the Declaration of Independence and the U.S. Constitution, and the third lesson addressing the practical connection between the Union war effort, the freedom of the newly emancipated slaves, and the preservation of American self-government. Adding the second lesson would show why Lincoln's understanding of the union and Constitution obliged the president to defend the nation from secession. Adding the fourth lesson would explore how Lincoln thought that only a common memory of the war as the chastening of God to both sides for the national (not Southern) sin of slavery could restore national unity.
http://edsitement.neh.gov/category/subject-areas/history-and-social-studies/us/civil-war-and-reconstruction-1850-1877?page=22
4
Where does our food come from? How is the climate changing what is on our plates? And, with a global population set to hit nine billion by 2050, how will we make sure everyone has enough to eat? These are just some of the questions pupils can explore through Food for Thought, Oxfam's new global citizenship resource for schools, now available on the Guardian Teacher Network. Can You Beat the System is a role-play task that encourages primary and secondary pupils to put themselves in the shoes of small-scale farmers working in less economically developed countries. Teams must work together to "produce" crops in the face of challenges from the weather, governments and traders. The aim is to explore factors affecting people's ability to grow food while highlighting inequalities in the global food system. Similar themes are covered in the primary lesson Farming Snakes and Ladders and the secondary lesson Farming Heroes . Both activities encourage pupils to identify the challenges faced by small-scale farmers and to suggest ways in which these can be overcome. The benefits of small-scale farming to the local community and wider world are also considered. Diet and Climate Change is an activity for primary and secondary pupils that explores the impact of climate change on the diet and income of an Ethiopian family who make their living by farming. Pupils consider the ways families can adapt to problems such as drought, and who should be helping them to do this. Grow Island is a role-play activity that examines issues of fairness and sustainability related to land ownership. Primary pupils are encouraged to think about the importance of land and why demand for it can be very high. For secondary pupils, Geography Mystery in Tanzania is a group work activity that looks at land purchases specifically for bio-fuels. Pupils are asked to consider who might benefit from this sort of investment and what some of the longer-term problems might be. The Power-Shift is a differentiated activity for primary and secondary pupils that aims to boost understanding of who's who in the global food system. Pupils consider the different groups in society who are able to make things fairer and the relative power each one has to bring about change. The activity is set in the context of something that pupils would like to change about their own school. Throughout the Food for Thought activities, pupils are encouraged to learn, think and take action as active global citizens. The action planning guide supports pupils in choosing a course of action, planning how to implement it and evaluating its success. The resources are supported by a Food for Thought wall chart that can be used to track pupils' learning and includes a teachers' guide to all of the activities. The Guardian Teacher Network has almost 100,000 pages of lesson plans and interactive materials. To see and share for yourself go to teachers.theguardian.com. There are also hundreds of jobs on the site, contact us for a free trial of your first advert: schoolsjobs.theguardian.com.
http://www.theguardian.com/education/2012/jan/16/oxfam-food-for-thought-resources
4.28125
The Nordic country leads the PISA rankings with free schooling that places the best-prepared teachers in primary education. Today's Finnish children will tomorrow be among the best professionals ... It often occurs that we get confused with the concept of prior knowledge and its relationship to the construction of new learning. It would only seem logical to always find out what the students know before delivering a class or a course of any discipline. However, the difference resides in what information we would be looking for and the purpose of retrieving that data. When we work with CLIL projects it is necessary to spend enough time in the process of exploring previous knowledge through different tools. The links that students can make to their personal experience and lives, the hypotheses and ideas they may have incidentally acquired about a certain topic will all contribute to set their always curious minds to work. Lev Vygotsky said "Learning always proceeds from the known to the new. Good teaching will recognize and build on this connection." Some tips to explore prior knowledge: -Use various tools individually or in groups such as: incomplete phrases or sentences, brainstorming, short multiple choice questionnaires, graphic organizers, cartoons, short videos, pictures, parts of stories and others. -Accept all the opinions without judging or correcting, stating that you are in an exploratory stage and that all ideas will be welcomed. -Keep a record of students' ideas to use at a later stage. -Refrain from correcting or indicating the right response. -Use your observations and collected information to decide on the project's future path. Good CLIL lessons should start by favoring risk taking to express ideas through drawings, writings and brainstorming, allowing for different views and tolerating wrong or hilarious answers, avoiding any judgment. It will be throughout the process of experiencing the unit/project that the students, together with appropriate teacher interventions and class discussions, will be able to reflect on their own ideas. The teacher's tolerance, observation and confidence in students' possibilities are of crucial importance to set the atmosphere of high-challenge and high-support classrooms. Many CLIL projects or units would fit into a constructivist perspective if they were seriously "meaning oriented". One of the most common errors of some publications that present themselves under the "CLIL" umbrella is that they don't offer real problems or questions to be solved by the students. In those cases, information is just correlated around a certain "topic". Arriving at integration through a good leading question is one of the first important steps to take when planning a CLIL didactic unit or project. Jerome Bruner said: "The art of asking provoking questions is at least as important as that of providing clear answers [...], and the art of setting those questions to good use and keeping them alive is as important as the first two." Here are some tips to come up with a good question: -Avoid simple "yes-no" questions -The question will need reasoning and some research to be answered -It will relate to curricular guidelines and to students' lives -It will motivate students to read, write, think and speak Can the world feed 10 billion people? Do revolutions always work? Do all animals have hearts? Why do animals travel? Why did humans lose their fur?
Constructivism provides a strong rationale for content-based curricula such as CLIL, since it is holistically oriented and meaning-seeking. So, let's start our CLIL projects with a good question!
http://www.scoop.it/t/teaching-english-from-a-constructivist-perspective/?tag=Education
4.0625
An adjective clause (also called a relative clause) is a dependent clause that functions as an adjective: it modifies a noun or pronoun, giving a description or more information, as the clause "that I told you about" does in "This is the book that I told you about" and "who saw us" does in "It was she who saw us." An adjective clause is a group of words with a subject and a verb, introduced by a relative pronoun or relative adverb; its pattern is relative pronoun or relative adverb + subject + verb, or relative pronoun or relative adverb + verb (when the relative word itself acts as the subject). Dependent clauses are also called subordinate clauses, and there are three basic types: adjective clauses, adverb clauses, and noun clauses. Subordination is used to indicate that one part of a sentence is secondary (or subordinate) to another part; for example, to emphasize that father sets his unicorn traps at night, the first main clause can be turned into an adjective clause: "My father, who sets his unicorn traps at night, ..." An adjective clause usually comes directly after the noun it modifies. In addition to subject-pattern adjective clauses, in which the relative pronoun is the subject of the clause (as in "the students who studied"), there are also object-pattern ones, in which the relative pronoun replaces the object of the clause (as in "the book that I read").
http://www.ask.com/web?qsrc=6&o=102140&oo=102140&l=dir&gc=1&qo=popularsearches&ad=dirN&q=Adjective+Clause
4.09375
Harmonic series (music) Pitched musical instruments are often based on an approximate harmonic oscillator such as a string or a column of air, which oscillates at numerous frequencies simultaneously. At these resonant frequencies, waves travel in both directions along the string or air column, reinforcing and canceling each other to form standing waves. Interaction with the surrounding air causes audible sound waves, which travel away from the instrument. Because of the typical spacing of the resonances, these frequencies are mostly limited to integer multiples, or harmonics, of the lowest frequency, and such multiples form the harmonic series (see harmonic series (mathematics)). The musical pitch of a note is usually perceived as the lowest partial present (the fundamental frequency), which may be the one created by vibration over the full length of the string or air column, or a higher harmonic chosen by the player. The musical timbre of a steady tone from such an instrument is determined by the relative strengths of each harmonic. Partial, harmonic, fundamental, inharmonicity, and overtone A "complex tone" (the sound of a note with a timbre particular to the instrument playing the note) "can be described as a combination of many simple periodic waves (i.e., sine waves) or partials, each with its own frequency of vibration, amplitude, and phase." (See also, Fourier analysis.) A partial is any of the sine waves (or "simple tones", as Ellis calls them when translating Helmholtz) of which a complex tone is composed. A harmonic is any member of the harmonic series, an ideal set of frequencies that are positive integer multiples of a common fundamental frequency. The fundamental is also considered a harmonic because it is 1 times itself. A harmonic partial is any real partial component of a complex tone that matches (or nearly matches) an ideal harmonic. An inharmonic partial is any partial that does not match an ideal harmonic. Inharmonicity is a measure of the deviation of a partial from the closest ideal harmonic, typically measured in cents for each partial. Many pitched acoustic instruments are designed to have partials that are close to being whole-number ratios with very low inharmonicity; therefore, in music theory, and in instrument design, it is convenient, although not strictly accurate, to speak of the partials in those instruments' sounds as "harmonics", even though they have some inharmonicity. Other pitched instruments, especially certain percussion instruments, such as marimba, vibraphone, tubular bells, and timpani, contain mostly inharmonic partials, yet may give the ear a good sense of pitch because of a few strong partials that resemble harmonics. Unpitched, or indefinite-pitched instruments, such as cymbals, gongs, or tam-tams make sounds (produce spectra) that are rich in inharmonic partials (make "noise") and give no impression of implying any particular pitch. An overtone is any partial except the lowest partial. The term overtone does not imply harmonicity or inharmonicity and has no other special meaning other than to exclude the fundamental. It is the relative strengths of the different overtones that gives an instrument its particular timbre, tone color, or character. When writing or speaking of overtones and partials numerically, care must be taken to designate each correctly to avoid any confusion of one for the other, so the second overtone may not be the third partial, because it is second sound in series. 
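Since inharmonicity is defined above as a deviation in cents from the closest ideal harmonic, it can be computed in a few lines. The sketch below is illustrative only: the fundamental and the "measured" partial frequencies are made-up values, not measurements of any real instrument.

    import math

    def inharmonicity_cents(fundamental_hz, partial_hz):
        """Deviation of a partial from the closest ideal harmonic, in cents."""
        n = max(1, round(partial_hz / fundamental_hz))   # index of the nearest ideal harmonic
        ideal_hz = n * fundamental_hz                    # ideal harmonic frequency n * f0
        return n, 1200.0 * math.log2(partial_hz / ideal_hz)

    f0 = 110.0                                         # assumed fundamental (A2)
    measured_partials = [110.0, 220.3, 331.1, 442.6]   # hypothetical partials of a slightly stiff string
    for p in measured_partials:
        n, cents = inharmonicity_cents(f0, p)
        print(f"{p:7.1f} Hz is closest to harmonic {n} ({cents:+.1f} cents)")

For these invented values the higher partials drift progressively sharp of their ideal harmonics, which is the pattern that stretched tuning compensates for on pianos.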
Some electronic instruments, such as theremins and synthesizers, can play a pure frequency with no overtones (a sine wave). Synthesizers can also combine pure frequencies into more complex tones, such as to simulate other instruments. Certain flutes and ocarinas are very nearly without overtones. Frequencies, wavelengths, and musical intervals in example systems The simplest case to visualise is a vibrating string, as in the illustration; the string has fixed points at each end, and each harmonic mode divides it into 1, 2, 3, 4, etc., equal-sized sections resonating at increasingly higher frequencies. Similar arguments apply to vibrating air columns in wind instruments, although these are complicated by having the possibility of anti-nodes (that is, the air column is closed at one end and open at the other), conical as opposed to cylindrical bores, or end-openings that run the gamut from no flare (bell), cone flare (bell), or exponentially shaped flares (bells). In most pitched musical instruments, the fundamental (first harmonic) is accompanied by other, higher-frequency harmonics. Thus shorter-wavelength, higher-frequency waves occur with varying prominence and give each instrument its characteristic tone quality. The fact that a string is fixed at each end means that the longest allowed wavelength on the string (which gives the fundamental frequency) is twice the length of the string (one round trip, with a half cycle fitting between the nodes at the two ends). Other allowed wavelengths are 1/2, 1/3, 1/4, 1/5, 1/6, etc. times that of the fundamental. Theoretically, these shorter wavelengths correspond to vibrations at frequencies that are 2, 3, 4, 5, 6, etc., times the fundamental frequency. Physical characteristics of the vibrating medium and/or the resonator it vibrates against often alter these frequencies. (See inharmonicity and stretched tuning for alterations specific to wire-stringed instruments and certain electric pianos.) However, those alterations are small, and except for precise, highly specialized tuning, it is reasonable to think of the frequencies of the harmonic series as integer multiples of the fundamental frequency. The harmonic series is an arithmetic series (1×f, 2×f, 3×f, 4×f, 5×f, ...). In terms of frequency (measured in cycles per second, or hertz (Hz) where f is the fundamental frequency), the difference between consecutive harmonics is therefore constant and equal to the fundamental. But because human ears respond to sound nonlinearly, higher harmonics are perceived as "closer together" than lower ones. On the other hand, the octave series is a geometric progression (2×f, 4×f, 8×f, 16×f, ...), and people hear these distances as "the same" in the sense of musical interval. In terms of what one hears, each octave in the harmonic series is divided into increasingly "smaller" and more numerous intervals. The second harmonic, whose frequency is twice of the fundamental, sounds an octave higher; the third harmonic, three times the frequency of the fundamental, sounds a perfect fifth above the second harmonic. The fourth harmonic vibrates at four times the frequency of the fundamental and sounds a perfect fourth above the third harmonic (two octaves above the fundamental). Double the harmonic number means double the frequency (which sounds an octave higher). Harmonics and tuning If the harmonics are transposed into the span of one octave, some of them are approximated by the notes of what the West has adopted as the chromatic scale based on the fundamental tone. 
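As a small illustration of the idealised string model described above (a sketch with assumed numbers, not a claim about any particular instrument), the lines below list the first few harmonics with their wavelengths 2L/n and frequencies n×f, and contrast the arithmetic harmonic series with the geometric octave series.

    # Idealised string: the longest allowed wavelength is twice the string length,
    # and the nth harmonic has wavelength 2L/n and frequency n * f1.
    string_length_m = 0.65       # assumed string length, roughly a guitar scale
    fundamental_hz = 110.0       # assumed fundamental frequency (A2)

    for n in range(1, 9):
        wavelength_m = 2 * string_length_m / n    # 2L, L, 2L/3, 2L/4, ...
        frequency_hz = n * fundamental_hz         # arithmetic series: f, 2f, 3f, ...
        print(f"harmonic {n}: {frequency_hz:6.1f} Hz, wavelength {wavelength_m:.3f} m")

    # Octaves form a geometric series (f, 2f, 4f, 8f, ...): equal musical steps,
    # even though successive harmonics sound closer and closer together.
    print("octave series:", [fundamental_hz * 2**k for k in range(4)])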
The Western chromatic scale has been modified into twelve equal semitones, which is slightly out of tune with many of the harmonics, especially the 7th, 11th, and 13th harmonics. In the late 1930s, composer Paul Hindemith ranked musical intervals according to their relative dissonance based on these and similar harmonic relationships. Below is a comparison between some of the first 31 harmonics and the intervals of 12-tone equal temperament (12TET), transposed into the span of one octave; only a few rows of the full table are preserved here, and in the full table tinted fields highlight differences greater than 5 cents (1/20th of a semitone), which is the human ear's "just noticeable difference" for notes played one after the other (smaller differences are noticeable with notes played simultaneously). Octave-duplicate harmonics, such as 7, 14 and 28, share a row.
| Harmonic | 12TET interval | Note | Variance (cents) |
| 17 | minor second | C♯, D♭ | +5 |
| 19 | minor third | D♯, E♭ | −2 |
| 25 | minor sixth | G♯, A♭ | −27 |
| 7, 14, 28 | minor seventh | A♯, B♭ | −31 |
The frequencies of the harmonic series, being integer multiples of the fundamental frequency, are naturally related to each other by whole-numbered ratios, and small whole-numbered ratios are likely the basis of the consonance of musical intervals (see just intonation). This objective structure is augmented by psychoacoustic phenomena. For example, a perfect fifth, say 200 and 300 Hz (cycles per second), causes a listener to perceive a combination tone of 100 Hz (the difference between 300 Hz and 200 Hz); that is, an octave below the lower (actual sounding) note. This 100 Hz first-order combination tone then interacts with both notes of the interval to produce second-order combination tones of 200 (300 – 100) and 100 (200 – 100) Hz; all further nth-order combination tones are the same, being formed from various subtractions of 100, 200, and 300. When one contrasts this with a dissonant interval such as a tritone (not tempered) with a frequency ratio of 7:5, we get, for example, 700 – 500 = 200 (1st-order combination tone) and 500 – 200 = 300 (2nd order). The rest of the combination tones are octaves of 100 Hz, so the 7:5 interval actually contains 4 notes: 100 Hz (and its octaves), 300 Hz, 500 Hz and 700 Hz. Note that the lowest combination tone (100 Hz) is a 17th (2 octaves and a major third) below the lower (actual sounding) note of the tritone. All the intervals succumb to similar analysis, as has been demonstrated by Paul Hindemith in his book The Craft of Musical Composition, although he rejected the use of harmonics from the 7th and beyond. Timbre of musical instruments The relative amplitudes (strengths) of the various harmonics primarily determine the timbre of different instruments and sounds, though onset transients, formants, noises, and inharmonicities also play a role. For example, the clarinet and saxophone have similar mouthpieces and reeds, and both produce sound through resonance of air inside a chamber whose mouthpiece end is considered closed. Because the clarinet's resonator is cylindrical, the even-numbered harmonics are less present. The saxophone's resonator is conical, which allows the even-numbered harmonics to sound more strongly and thus produces a more complex tone. The inharmonic ringing of the instrument's metal resonator is even more prominent in the sounds of brass instruments. Human ears tend to group phase-coherent, harmonically-related frequency components into a single sensation. 
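The variance column of the comparison above can be reproduced with a short calculation: octave-reduce each harmonic, convert the resulting ratio to cents, and take the gap to the nearest equal-tempered semitone. In this sketch (plain arithmetic, no audio libraries assumed), harmonics 17, 19, 25 and 7 come out near +5, −2, −27 and −31 cents, matching the rows shown.

    import math

    def harmonic_vs_12tet(n):
        """Octave-reduce harmonic n and compare it with the nearest 12TET step."""
        cents = (1200.0 * math.log2(n)) % 1200.0     # position within one octave, in cents
        nearest_step = round(cents / 100.0) % 12     # nearest equal-tempered semitone
        variance = cents - 100.0 * round(cents / 100.0)
        return nearest_step, variance

    for n in (2, 3, 5, 7, 11, 13, 17, 19, 25):
        step, var = harmonic_vs_12tet(n)
        print(f"harmonic {n:2d}: {step:2d} semitones above the fundamental, {var:+6.1f} cents")

The same arithmetic shows why the 11th and 13th harmonics are singled out in the text: they land roughly 49 cents flat and 41 cents sharp of the nearest tempered notes, nearly a quarter tone away.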
Rather than perceiving the individual partials (harmonic and inharmonic) of a musical tone, humans perceive them together as a tone color or timbre, and the overall pitch is heard as the fundamental of the harmonic series being experienced. If a sound is heard that is made up of even just a few simultaneous sine tones, and if the intervals among those tones form part of a harmonic series, the brain tends to group this input into a sensation of the pitch of the fundamental of that series, even if the fundamental is not present. Variations in the frequency of harmonics can also affect the perceived fundamental pitch. These variations, most clearly documented in the piano and other stringed instruments but also apparent in brass instruments, are caused by a combination of metal stiffness and the interaction of the vibrating air or string with the resonating body of the instrument. David Cope (1997) suggests the concept of interval strength, in which an interval's strength, consonance, or stability (see consonance and dissonance) is determined by its approximation to a lower and stronger, or higher and weaker, position in the harmonic series. See also: Lipps–Meyer law. Thus, an equal-tempered perfect fifth is stronger than an equal-tempered minor third, since they approximate a just perfect fifth and a just minor third, respectively. The just minor third appears between harmonics 5 and 6 while the just fifth appears lower, between harmonics 2 and 3. - Fourier series - Klang (music) - Otonality and Utonality - Piano acoustics - Scale of harmonics - Stretched tuning - Undertone series - IEV 1994, sound: http://www.electropedia.org/iev/iev.nsf/display?openform&ievref=801-21-01 - Ibid, fundamental: http://www.electropedia.org/iev/iev.nsf/display?openform&ievref=801-30-01 - William Forde Thompson (2008). Music, Thought, and Feeling: Understanding the Psychology of Music. p. 46. ISBN 978-0-19-537707-1. - John R. Pierce (2001). "Consonance and Scales". In Perry R. Cook. Music, Cognition, and Computerized Sound. MIT Press. ISBN 978-0-262-53190-0. - Martha Goodway and Jay Scott Odell (1987). The Historical Harpsichord Volume Two: The Metallurgy of 17th- and 18th-Century Music Wire. Pendragon Press. ISBN 978-0-918728-54-8. - Riemann by Shedlock (1876). Dictionary of Music. Augener & Co., London. p. 143: "let it be understood, the second overtone is not the third tone of the series, but the second." - Juan G. Roederer (1995). The Physics and Psychophysics of Music. p. 106. ISBN 0-387-94366-8. - Fonville, John. 1991. "Ben Johnston's Extended Just Intonation: A Guide for Interpreters", p. 121. Perspectives of New Music 29, no. 2 (Summer): 106–37. - Hindemith, Paul (1942). The Craft of Musical Composition: Book 1—Theoretical Part. Translated by Arthur Mendel. London: Schott & Co; New York: Associated Music Publishers. ISBN 0901938300. - Cope, David (1997). Techniques of the Contemporary Composer, p. 40–41. New York, New York: Schirmer Books. ISBN 0-02-864737-8. - Interaction of reflected waves on a string is illustrated in a simplified animation - A Web-based Multimedia Approach to the Harmonic Series - Importance of prime harmonics in music theory - "Addendum to 'The Devolution of the Shepherd Trumpet and It's Seminal Importance in Music History'". 
Describes how the harmonic series is the basis of European folk song melodies - Octave Frequency Sweep, Consonance & Dissonance - The combined oscillation of a string with several of its lowest harmonics can be seen clearly in an interactive animation from Edward Zobel's "Zona Land".
https://en.wikipedia.org/wiki/Overtone_series
4
This week in history: The Fall of Constantinople had profound consequences On May 29, 1453, 560 years ago this week, Constantinople fell to the Ottoman Turks. The fall of this great city signaled the end of the Byzantine Empire, the medieval incarnation of the Roman Empire, and opened the way for the armies of Islam to push deeper into Europe from Asia. In A.D. 330, the Roman Emperor Constantine founded the city of Constantinople on the site of the Greek town of Byzantium to be the new imperial capital. Sitting on the Bosporus strait, which connects Europe and Asia, the new city was more easily defended than Rome, and it was a Christian city, reflecting the emperor's religious preference. Like Rome, Constantinople had seven hills divided into 14 districts. For centuries, the city stood as the center of imperial power, even after the fall of the Western Roman Empire in A.D. 476. Historians refer to this medieval incarnation of the empire as Byzantine. The Franks and the Italians of the time referred to its inhabitants simply as "the Greeks." The inhabitants themselves, however, continued to refer to themselves as Romans, and saw their emperors as the literal successors to Augustus, Marcus Aurelius and Constantine. Protected by impressive city walls, Constantinople was virtually impervious to attack, as when an army of Goths approached the city after the battle of Adrianople in A.D. 378. After the rise of Islam, the Byzantine empire lost much of its territory in the Middle East and North Africa, but the city of Constantinople proved a rock upon which wave after wave of Muslim armies broke. As Constantinople held the line against Islam in the East, modern Western civilization developed in France and Western Europe. Though the Franks had defeated Islamic armies advancing from Spain, had Byzantium fallen to Islam, a Muslim Europe might well have followed. Toward the end of the Middle Ages, however, Byzantine power was waning considerably. Orthodox Christian Constantinople had fallen to Catholic knights during the Fourth Crusade in 1204, ushering in nearly 60 years of Catholic rule before an Orthodox emperor was able to retake the throne. The mid-14th century saw the Black Death claim the lives of perhaps half the city's population. By the early 15th century, the Islamic Ottoman Turks had conquered virtually all of present-day Turkey, and the Byzantine empire was a shadow of its former self, consisting of a few scattered territories and islands outside of Constantinople itself. In 1451, Mehmed II succeeded his father to become the Ottoman sultan. In his book "1453: The Holy War for Constantinople and the Clash of Islam and the West," historian Roger Crowley described the 19-year-old ruler: "The man whom the Renaissance later presented as a monster of cruelty and perversion was a mass of contradictions. He was astute, brave and highly impulsive — capable of deep deception, tyrannical cruelty and acts of sudden kindness. He was moody and unpredictable, a bisexual who shunned close relationships, never forgave an insult, but who came to be loved for his pious foundations." Upon becoming sultan, Mehmed immediately began a new building program for his navy, and soon set about plans to do something that the many sultans before him could not: the conquest of Constantinople. In early 1453, he took an army of somewhere between 100,000 and 200,000 Ottoman troops into Byzantine territory, and on April 6 began major siege operations against the city. 
Constantine XI proved to be the last of the Byzantine emperors. Having ruled since 1449, Constantine knew the empire's defenses alone, including more than 12 miles of walls, were not enough to repel a determined Ottoman siege or assault.
http://www.deseretnews.com/article/865580842/This-week-in-history-The-Fall-of-Constantinople-had-profound-consequences.html
4.0625
Assessment for Learning This Page was created by Bryan Funk (2009) This Page was edited by Kari Duffy (January 2010) Assessment for Learning Assessment for learning focuses on engaging students in classroom assessment in support of their own learning and informing teachers about what to do next to help students to progress. Assessment for learning is assessment for improvement, not assessment for accountability, as can be the case with summative assessments (Stiggins, 2002). The key to Assessment for Learning (AFL) is to use a variety of assessment tools and methods in order to provide ongoing evidence to students, teachers and parents that demonstrates how well each student is mastering the identified outcomes. This evidence is used to provide descriptive feedback to the students and to enable the teacher to differentiate the instruction to meet the needs of individual students or groups. Black and Wiliam clearly indicate that formative assessment (AFL) will raise performance standards and improve overall student success (Black & Wiliam, 1998). In his book, Talk About Assessment: Strategies and Tools to Improve Learning, Damian Cooper (2007) defines Assessment for Learning as "assessment designed primarily to promote learning. Early drafts, first tries, and practice assignments are all examples of assessment for learning", and describes Assessment of Learning as "assessment designed primarily to determine student achievement at a given point in time. Report card grades should be comprised of data from assessments of learning". Cooper's (2007) first chapter introduces the educator to Eight Big Ideas about assessment: Big Idea 1 Assessment serves different purposes at different times: it may be used to find out what students already know and can do; it may be used to help students improve their learning; or it may be used to let students and their parents know how much they have learned in a prescribed period of time. Big Idea 2 Assessment must be planned and purposeful. Big Idea 3 Assessments must be balanced, including oral, performance, and written tasks, and be flexible in order to improve learning for all students. Big Idea 4 Assessment and instruction are inseparable because effective assessment informs learning. Big Idea 5 For assessment to be helpful to students, it must inform them in words, not numerical scores or letter grades, what they have done well, what they have done poorly, and what they need to do next in order to improve. Big Idea 6 Assessment is a collaborative process that is most effective when it involves self-, peer, and teacher assessment. Big Idea 7 Performance standards are an essential component of effective assessment. Big Idea 8 Grading and reporting student achievement is a caring, sensitive process that requires teachers' professional judgement. Six Big Strategies that Matter Black and Wiliam's work led to the development of five performance strategies for assessment for learning; these five have since been re-structured by Dr. Linda Kaser and Dr. Judy Halbert into six big strategies that matter and are described below (Koehn, 2008, p. 2). 1. 
Providing learners with clarity about and understanding of the learning intentions of the work being done (learners are presented the learning intentions at the beginning of the lesson, throughout the lesson, and refer to the learning intentions in their reflections and responses so teachers can see that connections between tasks and what is supposed to be learned are made) 2. Providing to and co-developing with learners the criteria for success (what will the finished task look like, how will you share your understandings with others?) 3. Providing ongoing descriptive feedback that moves learning forward for each learner (using feed forward in language the students understand; how can the next task improve upon the previous?) 4. Designing and using thoughtful classroom questions to lead discussions that generate evidence of learning (allowing the students to participate and interact amongst each other in meaningful oral discussion – talk is student to student(s), not a dialogue between teacher and one student) 5. Putting learners to work as learning/teaching resources for each other using self and peer assessment (student coaching, students understanding learning intentions so well that they can teach a younger student or peer) 6. Doing everything we can think of to make sure that learners have ownership of their own learning (empowering each student to succeed). Assessment for Learning vs. Assessment of Learning Gregory, Cameron, and Davies (1997) outline some distinct differences between Assessment for Learning and Assessment of Learning. Educators are using these terms to help distinguish between the teacher's role as a learning coach versus the teacher's role of judging the extent of a student's achievement in relation to an established standard. This assessment is considered summative and is done at the end. 1. Assessment for learning is the big deal, while assessment of learning is the done deal. 2. Assessment for learning is formative, while assessment of learning is summative. 3. Assessment for learning is supportive, while assessment of learning measures. 4. Assessment for learning uses descriptions, while assessment of learning uses scores. 5. Assessment for learning happens day by day, moment by moment, while assessment of learning happens at the end. The assertion is that neither one is better than the other, but both need to be used within a students learning so that the student is able to understand not only the work that is being asked of them, but also how their own learning occurs. Assessment for learning is intended to be both diagnostic and formative to help students improve their learning. (chart from Anne Davies website: http://www.annedavies.com/assessment_for_learning_tr_tjb.html) Role of the Student Students are involved in identifying achievement expectations from the beginning of the learning by studying exemplars of strong and weak work. It is very important that students have a clear understanding of the learning intentions and expected outcomes of the work they are being asked to do. Assessment for learning will make the student’s learning visible and will enable both the teacher and learner to reflect and adjust the learning process. The learners play an important role in developing and understanding the scaffolding they will be climbing as they approach those outcomes. 
Students partner with their teacher to continuously monitor their current level of attainment in relation to agreed-upon expectations so they can set goals for what to learn next and thus play a role in managing their own progress. Students are asked to communicate evidence of learning to one another, to their teacher, and to their families, and they do so along the entire learning journey, not only at the end. Throughout the learning, students are inside the assessment process, watching themselves grow, feeling in control of their success, and believing that continued success is within reach if they keep trying. Role of the Teacher Assessment for learning not only provides reflective feedback to guide the learning process, but empowers students to control and dictate the direction of their learning. Purposeful use of AFL will enable students to experience metacognition, where they engage with and reflect on their learning experience. "Intelligent thought involves 'metacognition' or self monitoring of learning and thinking" (Shepard, 2000, p. 8). Although much of assessment for learning is about empowering the student to understand and take control of their learning, the teacher plays a critical role in choosing the appropriate assessments and using them to differentiate instruction to meet the individual needs of the students. The teacher is responsible for aligning instruction with the targeted outcomes, selecting and adapting materials and resources, and identifying specific learning requirements of students or groups of students. Once the teacher has begun collecting information through assessment, their focus becomes creating differentiated teaching strategies and learning opportunities in order to help individual students move forward in their learning. This is partially accomplished by providing immediate feedback and direction to students and then completing the cycle again. In Rethinking Classroom Assessment: Assessment for Learning, Assessment as Learning, Assessment of Learning, there are four critical questions that the teacher must ask when planning for assessment for learning: Why am I assessing? If the intent of assessment is to enhance student learning, teachers use assessment for learning to uncover what students believe to be true and to learn more about the connections students are making, their prior knowledge, preconceptions, gaps, and learning styles. This information is used to inform and differentiate instruction to build on what students already know and to challenge students when there are problems inhibiting progression to the next stages of learning. Teachers use this information to provide their students with descriptive feedback that will further their learning, and not as a summative assessment or to report a grade. What am I assessing? Assessment for learning requires ongoing assessment of the outcomes that comprise the intended learning. In most cases these are the curriculum outcomes. Teachers create assessments that will expose students' thinking and skills in relation to the intended learning, and the common preconceptions. What assessment method should I use? When planning assessment for learning, the teacher must think about what assessment is designed to expose, and must decide which assessment approaches are most likely to give detailed information about what each student is thinking and learning. The methods need to incorporate a variety of ways for students to demonstrate their learning. 
For example, having students complete tasks orally or through visual representation allows those who are struggling with reading or writing to demonstrate their learning. How can I use the information? The information collected in assessment for learning is used to report back to the student by offering descriptive, timely feedback and to provide the teacher with information to allow for changes in instruction for individual students or groups of students. Feedback for the Student Black and Wiliam (1998) suggested that feedback was a key component in assessment for learning. Cooper (2007) and Davies (2000a) assert that the quality of the feedback matters, as well as the timing. Quality feedback is descriptive feedback. Descriptive feedback makes it clear to the learner what is working and what needs to be worked on. When descriptive feedback allows students to adjust or change what they are doing, they are more likely to be successful (Davies, 2000a). In her book Making Classroom Assessment Work, Anne Davies (2000b) tells us that descriptive feedback supports learning when it: - comes during, as well as after, the learning - is easily understandable and related directly to the learning - is specific - so performance can improve - involves choice on the part of the learner as to what and how to receive feedback - is part of an ongoing conversation about learning - is in comparison to models, exemplars or descriptions - is about the performance or the work, not the person Feedback for learning is an integral part of the teaching process. It is the vital link between the teacher's assessment of a student's learning and the action following that assessment. Immediate feedback is key to maximizing student learning. Cooper (2007) and Gregory et al. (1997) provide many examples of feedback that is easy and fast. In their book Setting and Using Criteria, Gregory et al. (1997) provide ten quick ways that a teacher can give immediate feedback to guide the student's learning, without putting a mark in the gradebook. One such example is Met/Not Yet/I Noticed. This technique gives the student immediate feedback when the criteria are set up in a rubric and the teacher is simply checking off Met or Not Yet and giving descriptive feedback in the I Noticed column. Descriptive feedback makes explicit connections between students' thinking and the learning that is expected. It addresses misinterpretations and lack of understanding. Feedback should help identify the next steps and give an example of what good work looks like (Davies, 2000b). Feedback for learning will support or challenge an idea that a student holds. It allows the teacher to provide recognition for achievement and growth, and to give precise directions for improvement. Good descriptive feedback should also cause students to think about, and allow them to respond to, the teacher's or peer's suggestions. With a criterion-referenced standard the student's performance is measured against a predetermined set of performance indicators. We commonly see this type of assessment outside of the school setting, for example when a coach is teaching a new skill to an athlete, or in a driver's road test. Performance standards and rubrics are becoming more and more common in the educational setting, as teachers see the merit in allowing the students to know what the criteria are before they begin the task (Cooper, 2007). Another technique that is often employed is including students in setting the criteria. 
This increases student buy-in and makes them accountable to the standards they have set themselves. Assessment for learning provides information about what students can already do and what they are not able to do (Gregory et al., 1997). It is measured using a predetermined set of exemplars. This must be shared with the students before they begin the work. After the students have done their work, their tasks can be measured using the criteria that were set out for that task. This provides the necessary information for the teacher to create the next steps in instruction. Students will be in various places in their learning, and by using assessment for learning the teacher can compare student progress with the intended objectives and adjust the pace of instruction, the resources, or the amount of work required in order to lessen confusion and frustration on the part of the student (Davies, 2000b). By focusing on what the student does know and moving forward, the learner is supported rather than criticized. By knowing the criteria ahead of time, frustration is decreased and the student has a sense of ownership over their own work. Using performance standards and exemplars also decreases the frustration level of the students because they are able to understand how the criteria are being presented (Gregory et al., 1997). Setting and Using Criteria In Setting And Using Criteria, Gregory et al. (1997) outline some effective approaches to establishing criteria in the classroom. Teachers can set criteria on their own, or include students in deciding what will be valued for that particular task. Criteria should always come out of the learning outcomes set for that grade. Criteria should be set for projects and assignments and do not have to be set for day-to-day tasks. By setting the criteria, the teacher is outlining how the task will be valued or judged. Gregory et al. (1997) have found that the "following four-step process for creating criteria with students encourages student participation, understanding, and ownership". Step One: Brainstorm Step Two: Sort and categorize Step Three: Make and post a T-chart Step Four: Add, revise, refine This process can be done with the students using something very familiar to start with, so that they learn how to set criteria. After some amount of practice, the students become faster at pulling out the major ideas or skills that you want them to know or do. Planning with Assessment in Mind Grant Wiggins and Jay McTighe (1998) coined the idea of backward design planning with their book, "Understanding by Design". The focus of their work was to state that if learning was to be effective for the students, the teacher must begin with the final destination in mind, and that the programs or activities must be 'backward in design' (Wiggins & McTighe, 1998). Designing curriculum this way has been described as backward because teachers traditionally start curriculum planning with interesting activities and textbooks in mind, rather than starting with the big ideas or goals they want the students to master (Wiggins & McTighe, 1998). Teachers should be clear about what learning targets or goals will be set for the students and what formative and culminating assessments will be used to provide evidence that the students have mastered those targets or goals. The students need to be informed of what the assessments will be along the way and for the final culmination, so that they have a clear sense of what their goals need to be. 
Students should also be given the reasons why each assessment will be looked at, so that they will understand what is being asked of them and when (Wiggins & McTighe, 1998). Teachers begin with the end in mind, and set the task to reflect the learning. Teachers should inform the students about the big ideas and essential questions, the performance requirements, and the evaluative criteria at the beginning of the unit or course. The students should be able to describe these goals (big ideas and essential questions) of the unit or course. This helps to ensure that the students are aware of the expectations and optimal learning takes place. "To begin with the end in mind means to start with a clear understanding of your destination. It means to know where you’re going so that you better understand where you are now so that the steps you take are always in the right direction." (Covey, 1990) Created by Kirsten O'Coin (2016) Black, P., & Wiliam, D. (1998). Inside the black box: Raising standards through classroom assessment [Electronic version]. Phi Delta Kappan, 80(2). 139-44. 32 Cooper, D. (2010). Talk About Assessment: High School Strategies and Tools. Nelson/Thomson, Canada Ltd., Toronto. Cooper, D. (2007). Talk about assessment: Strategies and tools to improve learning. Nelson/Thomson, Canada Ltd., Toronto. Covey, S. (1990). The Seven Habits of Highly Effective People. New York: Fireside. Davies, A. (2000a). Feed Back...Feed Forward: Using assessment to boost literacy learning. Online Journal © 2003 Classroom Connections International - www.connect2learning.com (Originally published in Primary Leadership. Vol.2 No. 3 Spring Issue (2000) p.53-55). Davies, A. (2000b). Making Classroom Assessment Work. Courtney, BC: Classroom Connections. Gregory, K., Cameron, C. & Davies, A. (2000). Self-Assessment and Goal-Setting. Merville, BC: Connections Publishing. Gregory, K., Cameron, C. & Davies, A. (1997). Setting and Using Criteria. Merville, BC: Connections Publishing. Koehn. (2008). Together is better (BCTF Teacher Inquirer). Retrieved February 18, 2009 from Website: http://bctf.ca/uploadedFiles/Publications/TeacherInquirer/archive/2008-09/2008-10/Koehn.pdf 53 Rethinking Classroom Assessment: Assessment for Learning, Assessment as Learning, Assessment of Learning - new publication of the Western and Northern Canadian Protocol (WNCP). Retrieved March 1, 2009 from: http://www.aac.ab.ca/public/rethinking.pdf Shepard, L. A. (2000). The role of assessment in a learning culture [Electronic version]. Educational Researcher, 29(7). 4-14. 32 Stiggins, R. (2002). "Assessment Crisis: The Absence of Assessment FOR Learning.” Phi Delta Kappan, 83 (10), 758–765 Stiggins, R. (2005). "From Formative Assessment to Assessment FOR Learning: A Path to Success in Standards-Based Schools." Phi Delta Kappan, 87,(4) Wiggins, G. (1993). Assessing Student Performance: Exploring the purpose and limits of testing. San Fransisco, CA: Jossey-Bass. Wiggins, G. & McTighe, J. (1998) Understanding by Design. Alexandria, VA:ASCD.
http://etec.ctlt.ubc.ca/510wiki/Assessment_for_Learning
4.1875
Earth and Mars are two of the rocky terrestrial planets that orbit within the inner Solar System. In some ways they are very similar, but in other ways, they couldn't be more different. Let's take a look at Earth and Mars and consider their similarities and differences. The origin of Earth and Mars is the same as that of all the planets in the Solar System. Scientists believe that the Sun, the planets and everything in the Solar System all formed at the same time within the solar nebula: a giant cloud of cold molecular hydrogen. The Sun formed in the middle of the Solar System, and its rapid rotation created a flattened disk of material surrounding it. It was from within this disk that Earth and Mars and all the planets formed. Both Earth and Mars are terrestrial planets, made up of rock and metal. We know quite a bit about the internal structure of Earth, but can only guess at the structure of Mars. Scientists think that both planets have an inner core of metal surrounded by a mantle of rock. A thin crust covers the mantle. Since Mars is a smaller world, it's believed that it cooled faster, so its crust is thicker. Mars lacks a global magnetic field, so it probably doesn't have a rotating liquid-metal core like the one here on Earth. Unlike Earth, Mars has no plate tectonics. Perhaps it was because of this rapid cooling, but the crust on Mars is one thick shell surrounding the entire planet. While plate tectonics is constantly resurfacing planet Earth, we can see that the surface of Mars is ancient, hammered by thousands of meteorite impacts. The lack of plate tectonics also allowed hotspots in the mantle to remain in the same position for billions of years. This was how the largest volcanoes in the Solar System, such as Olympus Mons, could get so big. Mars is small. At only 6,792 km across, it's about half the diameter of Earth, and has only 10% of the Earth's mass. This means that it has a much lower surface gravity than Earth. If you could stand on the surface of Mars, you would only experience about a third of the gravity you have on Earth. You would be able to jump roughly three times higher than you could on Earth. Earth and Mars also share water. But here on Earth, water is everywhere; the oceans account for 71% of the surface area of the planet. Mars looks dry and dusty, but there are vast deposits of water ice at the planet's northern and southern poles. It's thought that Mars had large quantities of water on its surface billions of years ago; there's evidence of flooding and ancient river valleys. But that water is long gone. What water remains is locked as ice underneath the ground. Of course, the biggest difference between Earth and Mars is life. Here on Earth, life is everywhere – you can see the green forests from space. Mars looks dry and dusty, and no spacecraft sent to Mars has found any life. Scientists think there could be life hiding underground where there is water, or inside rocks, but nothing has been found so far. NASA Solar System Exploration: Earth-Mars Comparison Chart NASA: Earth and Mars
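As a rough check on the size and gravity comparison above, here is a small sketch using approximate textbook values for mass and radius (assumed figures, not taken from the article) to derive surface gravity for both planets.

    # Surface gravity g = G * M / r^2, with approximate textbook values.
    G = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2

    planets = {
        "Earth": {"mass_kg": 5.97e24, "radius_m": 6.371e6},
        "Mars":  {"mass_kg": 6.42e23, "radius_m": 3.390e6},
    }

    gravity = {name: G * p["mass_kg"] / p["radius_m"] ** 2 for name, p in planets.items()}
    for name, g in gravity.items():
        print(f"{name}: {g:.2f} m/s^2")

    print(f"Mars surface gravity is about {gravity['Mars'] / gravity['Earth']:.0%} of Earth's")

With these numbers Mars comes out at roughly 3.7 m/s^2, about 38 percent of Earth's 9.8 m/s^2, which is where the "about a third of the gravity" figure comes from.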
http://www.universetoday.com/22677/earth-and-mars/
4
Black Hole Science To begin to understand Hawking's contributions, we must look at the tiny, subatomic particles that make up everything in our universe. For instance, particle pairs are constantly appearing and disappearing together. In every pair, there's a particle and an antiparticle with the opposite properties, like a proton and its corresponding antiparticle, the antiproton. Without being interrupted, these particles simultaneously appear, cancel each other out and disappear as fast as they arrived. But Hawking wondered what would happen to these particles if they were face-to-face with a black hole. In the 1970s, Hawking put forth the idea that a black hole likely sucks in one particle -- usually the antiparticle -- while allowing the other particle to escape. According to Hawking's theory, it's this leftover particle at the entrance of a black hole that ends up being emitted as a type of radiation called Hawking radiation. He hypothesized that since the other particle falls into the black hole, the black hole's mass is reduced by that incremental amount. Over time, black holes decrease in mass and collapse, resulting in a huge explosion that spits out matter throughout space. In a sense, Hawking's work suggests there's a lot more going on in black holes and the areas around them. Since researchers have to measure the presence of black holes indirectly, it's been difficult to confirm or counter Hawking's theory. One group created a miniature black hole of sorts in a lab and observed that Hawking radiation could be real [source: Shiga]. Still, others think the theory can only be confirmed by evidence from a real black hole. Then followed the information paradox -- the debate surrounding what happens to the qualities of matter once inside a black hole. Hawking wasn't afraid to be bold in drawing conclusions. In 1997, he made a bet with colleague John Preskill, arguing that information is permanently lost once it falls into a black hole [source: Hogan]. But in 2004, Hawking admitted that information isn't lost or channeled into another universe, but rather it seeps back into the existing universe in distorted form. Hawking's public admission of defeat reflects his attitude toward science as a field that is constantly adding to and correcting itself. His work on black holes might not seem particularly groundbreaking at first glance, but it spurred conversation that might not have taken place otherwise.
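To give a sense of scale for the slow mass loss described above, here is a sketch using the standard textbook formulas for the Hawking temperature and evaporation time of a non-rotating black hole; these formulas are general physics results, not something stated in this article, so treat the snippet as an illustrative aside.

    import math

    # Physical constants (SI units)
    hbar = 1.0546e-34    # reduced Planck constant, J s
    c = 2.998e8          # speed of light, m/s
    G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
    k_B = 1.381e-23      # Boltzmann constant, J/K
    M_sun = 1.989e30     # solar mass, kg

    def hawking_temperature(mass_kg):
        """Black-body temperature of the radiation: T = hbar c^3 / (8 pi G M k_B)."""
        return hbar * c ** 3 / (8 * math.pi * G * mass_kg * k_B)

    def evaporation_time(mass_kg):
        """Approximate evaporation time: t = 5120 pi G^2 M^3 / (hbar c^4), in seconds."""
        return 5120 * math.pi * G ** 2 * mass_kg ** 3 / (hbar * c ** 4)

    print(f"Hawking temperature of a solar-mass black hole: {hawking_temperature(M_sun):.1e} K")
    print(f"Evaporation time: {evaporation_time(M_sun):.1e} s")

For a black hole the mass of the Sun, the temperature works out to a few hundredths of a microkelvin and the lifetime to around 10^67 years, which is why the evaporation Hawking predicted has never been observed directly for astrophysical black holes.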
http://science.howstuffworks.com/dictionary/famous-scientists/physicists/stephen-hawking2.htm
4.0625
In its broadest sense, impeachment is the process by which public officials may be removed from office on the basis of their conduct. Strictly speaking, it is the decision by a legislature to accuse an official of one or more offenses that warrant removal according to constitutional standards. A vote to impeach then triggers a trial based on those charges. The most famous impeachment proceedings have involved presidents, but every state has its own procedures. Most follow the federal model in general, but vary widely in their specifics. At the federal level, impeachment starts in the House of Representatives, where members may initiate resolutions to impeach a sitting president. The House Judiciary Committee decides if a resolution merits a formal impeachment inquiry. A simple majority vote in the full House can launch a formal inquiry. The House Judiciary Committee conducts an investigation to determine if allegations against a president warrant charges, or articles of impeachment. If a simple majority of the full House votes to charge a president with at least one article of impeachment, that indictment will move to the Senate for trial. At that point, the president has been "impeached" by the House. House members act as or appoint congressional prosecutors. The chief justice of the Supreme Court presides over the trial in the Senate chamber. A two-thirds vote is required to convict and remove from office. The U.S. Constitution states that "The President, Vice President and all civil Officers of the United States, shall be removed from Office on Impeachment for, and Conviction of, Treason, Bribery, or other high Crimes and Misdemeanors." (Article II, Section 4). The House of Representatives has impeached two Presidents: Andrew Johnson and Bill Clinton. Johnson was charged in 1868 with eleven articles of impeachment, but was acquitted by a single vote in the Senate trial. The House approved two of four proposed articles of impeachment against Bill Clinton in 1998, but he was acquitted by the Senate early the next year. Richard Nixon resigned in 1974 before a final vote in the full House could send him to trial on three articles of impeachment. Each state constitution outlines a unique impeachment procedure, including variations on the list of impeachable offenses, protocol for an impeachment trial and the body responsible for an initial investigation. According to the Associated Press, seven governors in U.S. history have been removed from office following impeachment proceedings. The National Conference of State Legislatures said that a longer list would include states that have investigated governors for alleged offenses, voted to impeach a governor ahead of a trial, or held trials that resulted in acquittal. The only governor to be removed from office in the last 80 years was Gov. Evan Mecham of Arizona, who was convicted in 1988 of obstructing justice and misusing $80,000 in state money that he was charged with funneling to his car dealership to keep it afloat. In January 2009, the Illinois House of Representatives voted 114-1 to impeach Gov. Rod Blagojevich for abuse of power in connection with the federal investigation that had led to his arrest the month before. Mr. Blagojevich was charged with trying to sell the Senate seat vacated by Barack Obama and with seeking to extort campaign contributions in return for official actions, including providing reimbursement to a hospital. 
Following the process that has been generally adopted by state legislatures in recent decades, the Illinois House created a special investigative committee, which made a recommendation in favor of impeachment to the entire body. In all states except Alaska, Nebraska and Oregon, the House votes on articles of impeachment ahead of a trial. In Alaska, the process is reversed, according to The Book of States. That state's Senate must impeach a governor by voting on impeachment articles in order to initiate a trial in the House. Nebraska is the only state with a unicameral legislature. Without a state House, the Nebraska Senate votes to impeach before passing articles on to the state Supreme Court for a trial. Oregon is the only state without constitutional provisions for impeachment of a governor or other executive and judicial officers, according to the NCSL. Those officials may be removed from office, but not by the state's legislature. State courts in Oregon may try public officials for criminal offenses, but the procedure depends upon the jurisdiction of a crime. Nebraska convicted and dismissed from office its first governor in 1871, but has not fully impeached any governor since. Alaska has never removed a governor from office. — Rebecca Cathcart, Jan 12, 2008
http://topics.nytimes.com/top/reference/timestopics/subjects/i/impeachment/index.html?query=Ahmadinejad,%20Mahmoud&field=per&match=exact
4
Freedom of the press Freedom of the press or freedom of the media is the freedom of communication and expression through vehicles including various electronic media and published materials. While such freedom mostly implies the absence of interference from an overreaching state, its preservation may be sought through constitutional or other legal protections. With respect to governmental information, any government may distinguish which materials are public or protected from disclosure to the public based on classification of the information as sensitive, classified or secret, or on its being otherwise protected from disclosure because of its relevance to protecting the national interest. Many governments are also subject to sunshine laws or freedom of information legislation that are used to define the ambit of national interest. The United Nations' 1948 Universal Declaration of Human Rights states: "Everyone has the right to freedom of opinion and expression; this right includes freedom to hold opinions without interference and to seek, receive and impart information and ideas through any media and regardless of frontiers." This philosophy is usually accompanied by legislation ensuring various degrees of freedom of scientific research (known as scientific freedom), publishing, press and printing; the depth to which these laws are entrenched in a country's legal system can go as far down as its constitution. The concept of freedom of speech is often covered by the same laws as freedom of the press, thereby giving equal treatment to spoken and published expression. Relationship to self-publishing Freedom of the press is construed as an absence of interference by outside entities, such as a government or religious organization, rather than as a right for authors to have their works published by other people. This idea was famously summarized by the 20th century American journalist A. J. Liebling, who wrote, "Freedom of the press is guaranteed only to those who own one". Freedom of the press gives the printer or publisher exclusive control over what the publisher chooses to publish, including the right to refuse to print anything for any reason. If the author cannot reach a voluntary agreement with a publisher to produce the author's work, then the author must turn to self-publishing. Status of press freedom worldwide Beyond legal definitions, several non-governmental organizations use other criteria to judge the level of press freedom around the world: - Reporters Without Borders considers the number of journalists murdered, expelled or harassed, and the existence of a state monopoly on TV and radio, as well as the existence of censorship and self-censorship in the media, and the overall independence of media as well as the difficulties that foreign reporters may face. - The Committee to Protect Journalists (CPJ) uses the tools of journalism to help journalists by tracking press freedom issues through independent research, fact-finding missions, and firsthand contacts in the field, including local working journalists in countries around the world. CPJ shares information on breaking cases with other press freedom organizations worldwide through the International Freedom of Expression Exchange, a global e-mail network. CPJ also tracks journalist deaths and detentions. 
CPJ staff applies strict criteria for each case; researchers independently investigate and verify the circumstances behind each death or imprisonment. - Freedom House likewise studies the more general political and economic environments of each nation in order to determine whether relationships of dependence exist that limit in practice the level of press freedom that might exist in theory. So the concept of independence of the press is one closely linked with the concept of press freedom.
Worldwide press freedom index Every year, Reporters Without Borders establishes a ranking of countries in terms of their freedom of the press. The worldwide Press Freedom Index list is based on responses to surveys sent to journalists who are members of partner organisations of the RWB, as well as related specialists such as researchers, jurists and human rights activists. The survey asks questions about direct attacks on journalists and the media as well as other indirect sources of pressure against the free press, such as non-governmental groups. RWB is careful to note that the index only deals with press freedom, and does not measure the quality of journalism. In 2011–2012, the countries where the press was most free were Finland, Norway and Germany, followed by Estonia, Netherlands, Austria, Iceland, and Luxembourg. The country with the least degree of press freedom was Eritrea, followed by North Korea, Turkmenistan, Syria, Iran, and China.
Freedom of the Press Freedom of the Press is a yearly report by US-based non-governmental organization Freedom House, measuring the level of freedom and editorial independence enjoyed by the press in every nation and significant disputed territories around the world. Levels of freedom are scored on a scale from 1 (most free) to 100 (least free). Based on these scores, the nations are then classified as "Free", "Partly Free", or "Not Free".
According to Reporters Without Borders, more than a third of the world's people live in countries where there is no press freedom. Overwhelmingly, these people live in countries where there is no system of democracy or where there are serious deficiencies in the democratic process. Freedom of the press is an extremely problematic concept for most non-democratic systems of government since, in the modern age, strict control of access to information is critical to the existence of most non-democratic governments and their associated control systems and security apparatus. To this end, most non-democratic societies employ state-run news organizations to promote the propaganda critical to maintaining an existing political power base and suppress (often very brutally, through the use of police, military, or intelligence agencies) any significant attempts by the media or individual journalists to challenge the approved "government line" on contentious issues. In such countries, journalists operating on the fringes of what is deemed to be acceptable will very often find themselves the subject of considerable intimidation by agents of the state. This can range from simple threats to their professional careers (firing, professional blacklisting) to death threats, kidnapping, torture, and assassination. Reporters Without Borders reports that, in 2003, 42 journalists lost their lives pursuing their profession and that, in the same year, at least 130 journalists were in prison as a result of their occupational activities. In 2005, 63 journalists and 5 media assistants were killed worldwide.
Examples include: - The Lira Baysetova case in Kazakhstan. - The Georgiy R. Gongadze case in Ukraine. - In Nepal, Eritrea, and mainland China, journalists may spend years in jail simply for using the "wrong" word or photo.
Regions closed to foreign reporters - Chechnya, Russia - Ogaden, Ethiopia - Jammu & Kashmir, India - Waziristan, Pakistan - Agadez, Niger - North Korea
Central, Northern and Western Europe has a long tradition of freedom of speech, including freedom of the press. After World War II, Hugh Baillie, the president of the United Press wire service based in the U.S., promoted freedom of news dissemination. In 1944 he called for an open system of news sources and transmission, and a minimum of government regulation of the news. His proposals were aired at the Geneva Conference on Freedom of Information in 1948, but were blocked by the Soviets and by France.
Until 1694, England had an elaborate system of licensing; the most recent was seen in the Licensing of the Press Act 1662. No publication was allowed without the accompaniment of a government-granted license. Fifty years earlier, at a time of civil war, John Milton wrote his pamphlet Areopagitica. In this work Milton argued forcefully against this form of government censorship and parodied the idea, writing "when as debtors and delinquents may walk abroad without a keeper, but unoffensive books must not stir forth without a visible jailer in their title." Although at the time it did little to halt the practice of licensing, it would later be viewed as a significant milestone and one of the most eloquent defences of press freedom. Milton's central argument was that the individual is capable of using reason and distinguishing right from wrong, good from bad. In order to be able to exercise this rational right, the individual must have unlimited access to the ideas of his fellow men in "a free and open encounter." From Milton's writings developed the concept of the open marketplace of ideas, the idea that when people argue against each other, the good arguments will prevail.
One form of speech that was widely restricted in England was seditious libel, and laws were in place that made criticizing the government a crime. The King was above public criticism and statements critical of the government were forbidden, according to the English Court of the Star Chamber. Truth was not a defense to seditious libel because the goal was to prevent and punish all condemnation of the government. Locke contributed to the lapse of the Licensing Act in 1695, whereupon the press needed no license. Still, many libels were tried throughout the 18th century, until "the Society of the Bill of Rights" led by John Horne Tooke and John Wilkes organised a campaign to publish Parliamentary Debates. This culminated in three defeats of the Crown in the 1770 cases of Almon, of Miller and of Woodfall, who all had published one of the Letters of Junius, and the unsuccessful arrest of John Wheble in 1771. Thereafter the Crown was much more careful in the application of libel; for example, in the aftermath of the Peterloo Massacre, Burdett was convicted, whereas by contrast the Junius affair was over a satire and sarcasm about the non-lethal conduct and policies of government.
In Britain's American colonies, the first editors discovered their readers enjoyed it when they criticized the local governor; the governors discovered they could shut down the newspapers.
The most dramatic confrontation came in New York in 1734, where the governor brought John Peter Zenger to trial for criminal libel after the publication of satirical attacks. The defense lawyers argued that according to English common law, truth was a valid defense against libel. The jury acquitted Zenger, who became the iconic American hero for freedom of the press. The result was an emerging tension between the media and the government. By the mid-1760s, there were 24 weekly newspapers in the 13 colonies, and satirical attacks on government had become a common feature in American newspapers.
John Stuart Mill in 1859 in his book On Liberty approached the problem of authority versus liberty from the viewpoint of a 19th-century utilitarian: The individual has the right of expressing himself so long as he does not harm other individuals. The good society is one in which the greatest number of persons enjoy the greatest possible amount of happiness. Applying these general principles of liberty to freedom of expression, Mill states that if we silence an opinion, we may silence the truth. The individual freedom of expression is therefore essential to the well-being of society. Mill wrote: - If all mankind minus one, were of one opinion, and one, and only one person were of the contrary opinion, mankind would be no more justified in silencing that one person, than he, if he had the power, would be justified in silencing mankind.
Between September 4, 1770 and October 7, 1771 the kingdom of Denmark–Norway had the most unrestricted freedom of press of any country in Europe. This occurred during the regime of Johann Friedrich Struensee, whose second act was to abolish the old censorship laws. However, due to the great number of mostly anonymous pamphlets published that were critical of, and often slanderous towards, Struensee's own regime, he reinstated some restrictions on the freedom of the press a year later, on October 7, 1771.
After the Italian unification in 1861, the Albertine Statute of 1848 was adopted as the constitution of the Kingdom of Italy. The Statute granted the freedom of the press with some restrictions in case of abuses and in religious matters, as stated in Article 28: - "The press shall be free, but the law may suppress abuses of this freedom. However, Bibles, catechisms, liturgical and prayer books shall not be printed without the prior permission of the Bishop." After the abolition of the monarchy in 1946 and the abrogation of the Statute in 1948, the Constitution of the Republic of Italy guarantees the freedom of the press, as stated in Article 21, Paragraphs 2 and 3: - "The press may not be subjected to any authorisation or censorship. Seizure may be permitted only by judicial order stating the reason and only for offences expressly determined by the law on the press or in case of violation of the obligation to identify the persons responsible for such offences." The Constitution allows the warrantless confiscation of periodicals in cases of absolute urgency, when the Judiciary cannot intervene in time, on the condition that a judicial validation must be obtained within 24 hours. Article 21 also places restrictions on publications considered offensive to public morality, as stated in Paragraph 6: - "Publications, performances, and other exhibits offensive to public morality shall be prohibited. Preventive and repressive measures against such violations shall be established by law."
Nazi Germany (1933–1945) In 1933 freedom of the press was suppressed in Hitler's Germany by the Reichstag Fire Decree of President Paul von Hindenburg, just as Adolf Hitler was coming to power. Hitler largely suppressed freedom of the press through Joseph Goebbels' Ministry of Public Enlightenment and Propaganda. The Ministry acted as a central control-point for all media, issuing orders as to what stories could be run and what stories would be suppressed. Anyone involved in the film industry—from directors to the lowliest assistant—had to sign an oath of loyalty to the Nazi Party, due to the opinion-changing power Goebbels perceived movies to have. (Goebbels himself maintained some personal control over every single film made in Nazi Europe.) Journalists who crossed the Propaganda Ministry were routinely imprisoned.
Freedom of speech was first confirmed in Poland by King Casimir the Great in 1347 in the Wiślica Statutes. The first Polish newspapers were handwritten, later replaced by printed ones. The oldest Polish regular newspaper was the pro-royal Merkuriusz Polski Ordynaryjny, founded in 1661. Later laws expanded freedom of the press further (e.g., the March Constitution).
One of the world's first freedom of the press acts was introduced in Sweden in 1766, mainly due to the classical liberal member of parliament Anders Chydenius. Excepted and liable to prosecution was only vocal opposition to the King and the Church of Sweden. The Act was largely rolled back after King Gustav III's coup d'état in 1772, restored after the overthrow of his son, Gustav IV of Sweden, in 1809, and fully recognized with the abolishment of the king's prerogative to cancel licenses in the 1840s.
In the United States, the First Amendment to the Constitution provides: "Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press..."
The Indian Constitution, while not mentioning the word "press", provides for "the right to freedom of speech and expression" (Article 19(1)(a)). However this right is subject to restrictions under sub clause (2), whereby this freedom can be restricted for reasons of "sovereignty and integrity of India, the security of the State, friendly relations with foreign States, public order, preserving decency, preserving morality, in relation to contempt of court, defamation, or incitement to an offence". Laws such as the Official Secrets Act and Prevention of Terrorist Activities Act (PoTA) have been used to limit press freedom. Under PoTA, a person could be detained for up to six months for being in contact with a terrorist or terrorist group. PoTA was repealed in 2006, but the Official Secrets Act 1923 continues. For the first half-century of independence, media control by the state was the major constraint on press freedom. Indira Gandhi famously stated in 1975 that All India Radio is "a Government organ, it is going to remain a Government organ..." With the liberalization starting in the 1990s, private control of media has burgeoned, leading to increasing independence and greater scrutiny of government. India ranked poorly, at 140th out of 179 listed countries, in the Press Freedom Index 2013 released by Reporters Without Borders (RWB). As measured by the Press Freedom Index, India's press freedom has declined steadily since 2002, when it peaked at a rank of 80 among the reported countries.
Implications of new technologies Many of the traditional means of delivering information are being slowly superseded by the increasing pace of modern technological advance. Almost every conventional mode of media and information dissemination has a modern counterpart that offers significant potential advantages to journalists seeking to maintain and enhance their freedom of speech. A few simple examples of such phenomena include: - Satellite television versus terrestrial television: Whilst terrestrial television is relatively easy to manage and manipulate, satellite television is much more difficult to control as journalistic content can easily be broadcast from other jurisdictions beyond the control of individual governments. An example of this in the Middle East is the satellite broadcaster Al Jazeera. This Arabic-language media channel operates out of Qatar, whose government is relatively liberal with respect to many of its neighboring states. As such, its views and content are often problematic to a number of governments in the region and beyond. However, because of the increased affordability and miniaturisation of satellite technology (e.g. dishes and receivers) it is simply not practicable for most states to control popular access to the channel. - Web-based publishing (e.g., blogging) vs. traditional publishing: Traditional magazines and newspapers rely on physical resources (e.g., offices, printing presses) that can easily be targeted and forced to close down. Web-based publishing systems can be run using ubiquitous and inexpensive equipment and can operate from any global jurisdiction. To get control over web publications, nations and organisations are using geolocation and geolocation software. - Voice over Internet protocol (VOIP) vs. conventional telephony: Although conventional telephony systems are easily tapped and recorded, modern VOIP technology can employ low-cost strong cryptography to evade surveillance. As VOIP and similar technologies become more widespread they are likely to make the effective monitoring of journalists (and their contacts and activities) a very difficult task for governments. Naturally, governments are responding to the challenges posed by new media technologies by deploying increasingly sophisticated technology of their own (a notable example being China's attempts to impose control through a state-run internet service provider that controls access to the Internet) but it seems that this will become an increasingly difficult task as journalists continue to find new ways to exploit technology and stay one step ahead of the generally slower-moving government institutions that attempt to censor them. In May 2010, U.S. President Barack Obama signed legislation intended to promote a free press around the world, a bipartisan measure inspired by the murder in Pakistan of Daniel Pearl, the Wall Street Journal reporter, shortly after the September 11 attacks in 2001. The legislation, called the Daniel Pearl Freedom of the Press Act, requires the United States Department of State to expand its scrutiny of news media restrictions and intimidation as part of its annual review of human rights in each country. In 2012 the Obama Administration collected communication records from 20 separate home and office lines for Associated Press reporters over a two-month period, possibly in an effort to curtail government leaks to the press. 
The surveillance caused widespread condemnation by First Amendment experts and free press advocates, and led 50 major media organizations to sign and send a letter of protest to American attorney general Eric Holder. Organizations for press freedom - Article 19 - Canadian Journalists for Free Expression - The Committee to Protect Journalists - Electronic Frontier Foundation - Freedom House - Index on Censorship - Inter American Press Association - International Freedom of Expression Exchange - International Press Institute - Media Legal Defence Initiative - OSCE Representative on Freedom of the Media - Reporters Without Borders - Student Press Law Center - World Association of Newspapers and News Publishers - World Press Freedom Committee - Worldwide Governance Indicators |Part of a series on| |Censorship by country| - Article 10 of the European Convention on Human Rights - Areopagitica: a speech of Mr John Milton for the liberty of unlicensed printing to the Parliament of England - Chilling effect (term) - Cohen v. Cowles Media Co. — a ruling in the USA that a reporter's promise of a source's confidentiality may be enforced in court. - Declaration of Windhoek (1991) - Editorial independence - Free Speech, "The People’s Darling Privilege" - First Amendment to the United States Constitution - Freedom of speech - Freedom of the Press Act (1766) - Freedom of the Press (report) - Freedom of the press in the Russian Federation - Freedom of the press in the United States - Freedom of the press in Ukraine - Free speech in the media during the 2011 Libyan civil war - Gag order - International Freedom of Expression Exchange — "The largest online archive of information on press freedom violations", dating back to 1995 and covering more than 120 countries. - Journalism ethics and standards - Journaliste en danger - Journalistic standards - List of indices of freedom - Media blackout - Media transparency - News embargo - Section Two of the Canadian Charter of Rights and Freedoms - Photography is Not a Crime - Prior restraint - State media - Tunisia Monitoring Group - Virginia Declaration of Rights - World Press Freedom Day on May 3 - Worldwide Press Freedom Index - John Peter Zenger - Powe, L. A. Scot (1992). The Fourth Estate and the Constitution: Freedom of the Press in America. University of California Press. ISBN 9780520913165. - "Press Freedom Index 2014", Reporters Without Borders, 11 May 2014 - Press Freedom Index 2011-2012", Reporters Without Borders - "Description: Reporters Without Borders". The Media Research Hub. Social Science Research Council. 2003. Retrieved 23 September 2012. - Freedom House (2005). "Press Freedom Table (Press Freedom vs. Democracy ranks)". Freedom of the Press 2005. UK: World Audit. Retrieved 23 September 2012. - "Editor's daughter killed in mysterious circumstances", International Freedom of Expression Exchange (IFEX), 2 July 2002 - "Ukraine remembers slain reporter", BBC News, 16 September 2004 - "Do journalists have the right to work in Chechnya without accreditation?". Moscow Media Law and Policy Center. March 2000. Retrieved 2008-09-06. - "India praises McCain-Dalai Lama meeting". Washington, D.C.: WTOPews.com. July 27, 2008. Retrieved 2008-09-06. - Landay, Jonathan S. (March 20, 2008). "Radical Islamists no longer welcome in Pakistani tribal areas". McClatchy Washington Bureau. Retrieved 2008-09-06. - Eleonora W. Schoenebaum, ed. (1978), Political Profiles: The Truman Years, pp 16–17, Facts on File Inc., ISBN 9780871964533. 
- "British Press Freedom Under Threat", Editorial, New York Times, 14 November 2013. Retrieved 19 November 2013. - Alison Olson, "The Zenger Case Revisited: Satire, Sedition and Political Debate in Eighteenth Century America", Early American Literature, vol.35 no.3 (2000), pp: 223-245. - John Stuart Mill (1867). On Liberty. p. 10. - Laursen, John Christian (January 1998). "David Hume and the Danish Debate about Freedom of the Press in the 1770s". Journal of the History of Ideas 59 (1): 167–172. doi:10.1353/jhi.1998.0004. JSTOR 3654060. - "Lo Statuto Albertino" (PDF). The official website of the Presidency of the Italian Republic. - "The Italian Constitution" (PDF). The official website of the Presidency of the Italian Republic. - Jonathon Green and Nicholas J. Karolides, eds. (2009). Encyclopedia of Censorship. Infobase Publishing. pp. 194–96. - Skąd się wzięły gazety? - "The Freedom of the Press Act", Sveriges Riksdag - "The Swedish tradition of freedom of press" - "The World's First Freedom of Information Act (Sweden/Finland 1766)" - freedominfo.org, "Sweden" - "The Prevention of Terrorism Act 2002". - "Freedom of the Press". PUCL Bulletin (People's Union for Civil Liberties). July 1982. - "Press Freedom Index 2013". Reporters Without Borders. - "U.S. to Promote Press Freedom". New York Times. 17 May 2010. - Hicken, Jackie (15 May 2013). "Journalists push back against Obama administration for seizure of Associated Press records". Deseret News. Retrieved 16 May 2013. - Savage, Charlie; Leslie Kaufman (13 May 2013). "Phone Records of Journalists Seized by U.S.". The New York Times. Retrieved 16 May 2013. - Gant, Scott (2007). We're All Journalists Now: The Transformation of the Press and Reshaping of the Law in the Internet Age. New York: Free Press. ISBN 0-7432-9926-4. - Gardner, Mary A. The Inter American Press Association: Its Fight for Freedom of the Press, 1926–1960 (University of Texas Press, 2014) - George, Cherian. Freedom from the Press: Journalism and State Power in Singapore (2012) - McDonald, Blair (2015). "Freedom of Expression Revisited: Citizenship and Journalism in the Digital Era". Canadian Journal of Communication 40 (1). - Molnár, Peter, ed. Freedom of Speech and Freedom of Information Since the Fall of the Berlin Wall (Central European University Press, 2014) - Nord, Lars W., and Torbjörn Von Krogh. "The Freedom of The Press or The Fear Factor? Analysing Political Decisions and Non-Decisions in British Media Policy 1990-2012." Observatorio (OBS*) (2015) 9#1 pp: 01-16. - Starr, Paul (2004). The Creation of the Media: Political Origins of Modern Communications. New York: Basic Books. ISBN 0-465-08193-2. - Stockmann, Daniela. Media Commercialization and Authoritarian Rule in China (2012) - Thierer, Adam & Brian Anderson (2008). A Manifesto for Media Freedom. New York: Encounter Books. ISBN 1-59403-228-9. - Wilke, Jürgen (2013). Censorship and Freedom of the Press. Leibniz Institute of European History (IEG). |Look up freedom of the press in Wiktionary, the free dictionary.| |Wikimedia Commons has media related to Freedom of the press.| |Wikiquote has quotations related to: Freedom of the press| - Media Freedom Navigator Media Freedom Indices at a Glance - Risorse Etiche Publish and translate articles of independent journalists - the ACTivist Magazine - Paradox of media freedom in Pakistan - South East Europe Media Organisation - Banned Magazine, the journal of censorship and secrecy. 
- News and Free Speech — Newspaper Index Blog - Press Freedom - OSCE Representative on Freedom of the Media - MANA — the Media Alliance for New Activism - International Freedom of Expression Exchange — Monitors press freedom around the world - IPS Inter Press Service Independent news on press freedom around the world - The Reporters Committee for Freedom of the Press - Reporters Without Borders - Doha Center for Media Freedom - World Press Freedom Committee - Student Press Law Center - Union syndicale des journalistes CFDT - Mapping media freedom in Europe
https://en.wikipedia.org/wiki/Press_freedom
4.0625
Each fraction can be reduced to its simplest form, in which the numerator and denominator are as small as possible. To simplify a fraction, you first find the greatest common factor of the numerator and denominator, which is the biggest whole number that divides exactly into both. You then divide the numerator and denominator by that factor. For example, take 12/18. The numerator (12) and the denominator (18) are both divisible by 6, and 6 is the largest number that goes exactly into both 12 and 18, so the greatest common factor is 6. Therefore, if you divide the numerator and denominator by 6, you can write the fraction as (12/6)/(18/6) = 2/3. You can also simplify a fraction in steps. So 12/18 can also be written as 6/9 (the numerator and denominator can both be divided by 2). However, both 6 and 9 can also be divided by 3, so you can rewrite 6/9 as 2/3. 2/3 is the simplest form. More info: Simplify fractions
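The same procedure can be written in a few lines of code. Below is a minimal sketch in Python (the function name simplify is just illustrative); it uses the standard-library math.gcd to find the greatest common factor and then divides it out of the numerator and denominator, exactly as in the 12/18 example.

```python
from math import gcd

def simplify(numerator, denominator):
    """Reduce a fraction to its simplest form by dividing out the greatest common factor."""
    factor = gcd(numerator, denominator)   # e.g. gcd(12, 18) == 6
    return numerator // factor, denominator // factor

print(simplify(12, 18))  # (2, 3), i.e. 12/18 = 2/3
```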
http://www.fractioncalculator-online.com/simplify-fractions/
4.125
Influenza, or flu, is a respiratory infection caused by a variety of flu viruses. The most familiar aspect of the flu is the way it can “knock you off your feet” as it sweeps through entire communities. The flu differs in several ways from the common cold, a respiratory infection also caused by viruses. For example, people with colds rarely get fevers or headaches or suffer from the extreme exhaustion that flu viruses cause. The Centers for Disease Control and Prevention (CDC) estimates that 10 to 20 percent of Americans come down with the flu during each flu season, which typically lasts from November to March. Children are two to three times more likely than adults to get sick with the flu, and children frequently spread the virus to others. Although most people recover from the illness, CDC estimates that in the United States more than 100,000 people are hospitalized and about 36,000 people die from the flu and its complications every year. Flu outbreaks usually begin suddenly and occur mainly in the late fall and winter. The disease spreads through communities creating an epidemic. During the epidemic, the number of cases peaks in about 3 weeks and subsides after another 3 or 4 weeks. Half of the population of a community may be affected. Because schools are an excellent place for flu viruses to attack and spread, families with school-age children have more infections than other families, with an average of one-third of the family members infected each year. IMPORTANCE OF FLU Besides the rapid start of the outbreaks and the large numbers of people affected, the flu is an important disease because it can cause serious complications. Most people who get the flu get better within a week (although they may have a lingering cough and tire easily for a while longer). For elderly people, newborn babies, and people with certain chronic illnesses, however, the flu and its complications can be life-threatening. You can get the flu if someone around you who has the flu coughs or sneezes. You can get the flu simply by touching a surface like a telephone or door knob that has been contaminated by a touch from someone who has the flu. The viruses can pass through the air and enter your body through your nose or mouth. If you’ve touched a contaminated surface, they can pass from your hand to your nose or mouth. You are at greatest risk of getting infected in highly populated areas, such as in crowded living conditions and in schools. If you get infected by the flu virus, you will usually feel symptoms 1 to 4 days later. You can spread the flu to others before your symptoms start and for another 3 to 4 days after your symptoms appear. The symptoms start very quickly and may include - Body aches - Dry cough - Sore throat - Stuffy nose Typically, the fever begins to decline on the second or third day of the illness. The flu almost never causes symptoms in the stomach and intestines. The illness that some call “stomach flu” is not influenza. Usually, health care providers diagnose the flu on the basis of whether it is epidemic in the community and whether the person’s complaints fit the current pattern of symptoms. Health care providers rarely use laboratory tests to identify the virus during an epidemic. Health officials, however, monitor certain U.S. health clinics and do laboratory tests to determine which type of flu virus is responsible for the epidemic. The main way to keep from getting flu is to get a yearly flu vaccine. 
You can get the vaccine at your doctor’s office or a local clinic, and in many communities at workplaces, supermarkets, and drugstores. You must get the vaccine every year because it changes. Scientists make a different vaccine every year because the strains of flu viruses change from year to year. Nine to 10 months before the flu season begins, they prepare a new vaccine made from inactivated (killed) flu viruses. Because the viruses are killed, they cannot cause infection. The vaccine preparation is based on the strains of the flu viruses that are in circulation at the time. It includes those A and B viruses (see section below on types of flu viruses) expected to circulate the following winter. Sometimes, an unpredicted new strain may appear after the vaccine has been made and distributed to doctor’s offices and clinics. Because of this, even if you do get the flu vaccine, you still may get infected. If you do get infected, however, the disease usually is milder because the vaccine still will give you some protection.
http://blackdoctor.org/492/the-flu/
4
Posted by Jman on Thursday, April 25, 2013 at 2:12pm. ******I WILL GIVE MY ANSWERS AFTER I POST THE QUESTIONS****** 1. Momentum is a measure of ____. how hard it is to stop an object the amount of matter in an object the tendency of an object to change its motion the amount of force acting on an object Use the following equation to answer question 2: acceleration (in meters/second²) = net force (in newtons) ÷ mass (in kilograms) 2. A 300-N force acts on a 25-kg object. The acceleration of the object is ____. 3. The statement "for every action, there is an equal but opposite reaction" is a statement of ____. the law of conservation of momentum Newton's first law Newton's second law Newton's third law 4. A fixed, single pulley that is used to lift a block does which one of the following? doubles the force required to lift the block decreases the force required to lift the block makes the block easier to lift by changing the direction of the force needed to lift it decreases the force required and changes the direction of the force required 5. A heat engine ____. changes mechanical energy to thermal energy changes mechanical energy into electrical energy changes thermal energy into mechanical energy changes kinetic energy into thermal energy 6. Inertia _____. depends on direction depends on momentum resists a change in motion of an object both a and b 7. When a toy truck collides into a toy car, the momentum of _____ is the same before and after the collision. the truck multiplied by the car the truck plus the car 8. Air resistance _____ as you move faster. remains the same 9. A slanted surface used to raise an object is a(n) ____. (1 point) 10. A bar that is free to pivot about a fixed point is a _____. 11. When two or more simple machines work together they are called a(n) _____. 12. In a diesel engine, the fuel is ignited by _____. ☺☺☺☺☺THAT'S ALL THE Q's☺☺☺☺☺
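For question 2, the rearranged form of Newton's second law given above is all that is needed; the short Python sketch below simply plugs in the question's numbers as a quick arithmetic check.

```python
# Newton's second law rearranged: a = F_net / m
force_newtons = 300.0   # net force from question 2
mass_kg = 25.0          # mass of the object

acceleration = force_newtons / mass_kg
print(acceleration, "m/s^2")  # 12.0 m/s^2
```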
http://www.jiskha.com/display.cgi?id=1366913531
4.125
What Is A Watershed? A watershed is defined as a topographically delineated area drained by a stream. It is also referred to as a catchment and extends ridge top to ridge top. The interactions of the terrestrial and aquatic environments within the watershed determine watershed health. Much of Oregon’s natural resource management is based on watersheds. Why? First, watersheds are natural boundaries. Watershed management organizes and guides land and other resource use to provide desired goods and services without adversely affecting soil and water resources. This type of management also recognizes the interrelationships among land use, soil, water and linkages between upland and downstream areas. Finally, watershed management is based on scientific method; local citizens identify and prioritize projects, develop action plans, implement projects, and evaluate.
http://www.calapooia.org/about/watershed-assessment/
4.25
One of the two most important types of volcanoes, shield volcanoes are large and broad and have relatively gentle slopes. Kilauea on the island of Hawaii is a good example of a shield volcano. Eruptions on shield volcanoes are far less explosive than those on composite volcanoes. That is because the basalt lava that erupts from shield volcanoes contains less silica, SiO2, and is therefore less sticky (less viscous) and doesn't "plug up" the volcano. Because the lava is runnier, it travels further from the crater before it cools, causing the shield-like shape of the volcano as many eruptions build up over time.
Image: The summit of La Cumbre, a shield volcano on Fernandina Island, Galapagos Islands, as seen from Earth orbit (credit: NASA/SPL)
Professor Brian Cox describes the biggest volcano in the Solar System, Olympus Mons on Mars.
Professor Iain Stewart explains how hotspots are a good demonstration of Earth's system of tectonic plates. As the plates move across the Earth's surface, they interact with one another at plate boundaries, which are places where earthquakes and volcanoes are common. Typically, plate boundaries are also places of great mineral wealth.
Professor Iain Stewart explains how Mount Kilauea's eruptions of lava have built up the island of Hawaii over millions of years as a magma plume known as a hotspot rises up through the Earth's crust.
A shield volcano is a type of volcano usually built almost entirely of fluid magma flows. They are named for their large size and low profile, resembling a warrior's shield lying on the ground. This is caused by the highly fluid lava they erupt, which travels farther than lava erupted from stratovolcanoes. This results in the steady accumulation of broad sheets of lava, building up the shield volcano's distinctive form. A shield volcano's shape is due to the low viscosity of its mafic lava.
http://www.bbc.co.uk/science/earth/surface_and_interior/shield_volcano
4.03125
Federalism is a political concept describing the practice whereby a group of members are bound together by agreement or covenant (Latin: foedus, covenant) with a ... About the US Government System of Federalism and How it Works to Divide Powers Between State and Federal Government. Federalism. Federalism is one of the most important and innovative concepts in the U.S. Constitution, although the word never appears there. Federalism is the sharing ... federalism definition. A system of government in which power is divided between a national (federal) government and various regional governments. Although the federal system seems to strike a perfect balance of power between national and local needs, federations still have internal power struggles. Federalism. A principle of government that defines the relationship between the central government at the national level and its constituent units at the regional ... Federalism is a system of government in which the same territory is controlled by two levels of government. Generally, an overarching national government governs ... Federalism is the theory or advocacy of federal principles for dividing powers between member units and common institutions. Unlike in a unitary state ... Federalism in the United States is the constitutional relationship between U.S. state governments and the federal government of the United States.
https://www.search.com/reference/Federalism
4.1875
In observational astronomy an Einstein ring, also known as an Einstein-Chwolson ring or Chwolson ring, is the deformation of the light from a source (such as a galaxy or star) into a ring through gravitational lensing of the source's light by an object with an extremely large mass (such as another galaxy or a black hole). This occurs when the source, lens, and observer are all aligned. The first complete Einstein ring, designated B1938+666, was discovered through a collaboration between astronomers at the University of Manchester and NASA's Hubble Space Telescope in 1998.
Gravitational lensing is predicted by Albert Einstein's theory of general relativity. Instead of light from a source traveling in a straight line (in three dimensions), it is bent by the presence of a massive body, which distorts spacetime. An Einstein ring is a special case of gravitational lensing, caused by the exact alignment of the source, lens, and observer. This results in a symmetry around the lens, causing a ring-like structure. The angular radius of the ring, the Einstein radius, is
θ_E = √( (4GM/c²) · D_LS / (D_L · D_S) ),
where - G is the gravitational constant, - M is the mass of the lens, - c is the speed of light, - D_L is the angular diameter distance to the lens, - D_S is the angular diameter distance to the source, and - D_LS is the angular diameter distance between the lens and the source. Note that, over cosmological distances, D_LS ≠ D_S − D_L in general.
The bending of light by a gravitational body was predicted by Albert Einstein in 1912, a few years before the publication of General Relativity in 1916 (Renn et al. 1997). The ring effect was first mentioned in academic literature by Orest Chwolson in 1924. Einstein remarked upon this effect in 1936 in a paper prompted by a letter from a Czech engineer, R. W. Mandl, but stated: Of course, there is no hope of observing this phenomenon directly. First, we shall scarcely ever approach closely enough to such a central line. Second, the angle β will defy the resolving power of our instruments.— Science vol 84 p 506 1936 In this statement, β is the Einstein radius, currently denoted by θ_E (as in the expression above). However, Einstein was only considering the chance of observing Einstein rings produced by stars, which is low; the chance of observing those produced by larger lenses such as galaxies or black holes is higher, since the angular size of an Einstein ring increases with the mass of the lens.
Known Einstein rings Hundreds of gravitational lenses are currently known. About half a dozen of them are partial Einstein rings with diameters up to an arcsecond, although, because either the mass distribution of the lenses is not perfectly axially symmetrical or the source, lens, and observer are not perfectly aligned, we have yet to see a perfect Einstein ring. Most rings have been discovered in the radio range. The degree of completeness needed for an image seen through a gravitational lens to qualify as an Einstein ring is yet to be defined.
The first Einstein ring was discovered by Hewitt et al. (1988), who observed the radio source MG1131+0456 using the Very Large Array. This observation saw a quasar lensed by a nearer galaxy into two separate but very similar images of the same object, the images stretched round the lens into an almost complete ring. These dual images are another possible effect of the source, lens, and observer not being perfectly aligned. The first complete Einstein ring to be discovered was B1938+666, which was found by King et al. (1998) via optical follow-up with the Hubble Space Telescope of a gravitational lens imaged with MERLIN.
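The Einstein radius formula above also explains why observed rings are at the arcsecond scale. The short Python sketch below evaluates θ_E for purely illustrative, assumed values of the lens mass and the angular diameter distances (these numbers are not taken from any of the systems discussed here); for a large galaxy lens roughly halfway to a distant source it comes out near an arcsecond.

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M_SUN = 1.989e30     # solar mass, kg
GPC = 3.086e25       # one gigaparsec in metres

# Illustrative (assumed) values, not a fit to any real lens:
M = 1e11 * M_SUN                                   # lens mass of order a large galaxy
D_L, D_S, D_LS = 1.0 * GPC, 2.0 * GPC, 1.2 * GPC   # angular diameter distances

theta_E = math.sqrt(4 * G * M / c**2 * D_LS / (D_L * D_S))  # in radians
print(theta_E * 206265, "arcseconds")  # ~0.7 arcsec for these assumed values
```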
The galaxy causing the lens at B1938+666 is an ancient elliptical galaxy, and the image we see through the lens is a dark dwarf satellite galaxy, which we would otherwise not be able to see with current technology. In 2005, the combined power of the Sloan Digital Sky Survey (SDSS) with the Hubble Space Telescope was used in the Sloan Lens ACS (SLACS) Survey to find 19 new gravitational lenses, 8 of which showed Einstein rings; these are the 8 shown in the image to the right. As of 2009 this survey has found 85 confirmed gravitational lenses, but there is not yet a count of how many of them show Einstein rings. This survey is responsible for most of the recent discoveries of Einstein rings in the optical range; some examples which were found follow: - FOR J0332-3557, discovered by Remi Cabanac et al. in 2005, notable for its high redshift which allows us to use it to make observations about the early universe. - The "Cosmic Horseshoe" is a partial Einstein ring which was observed through the gravitational lens of LRG 3-757, a distinctively large Luminous Red Galaxy. It was discovered in 2007 by V. Belokurov et al. - SDSSJ0946+1006, the "double Einstein ring", was discovered by Raphael Gavazzi and Tommaso Treu in 2008, notable for the presence of multiple rings observed through the same gravitational lens, the significance of which is explained in the next section on extra rings. Another example is the radio/X-ray Einstein ring around PKS 1830-211, which is unusually strong in radio. It was discovered in X-ray by Varsha Gupta et al. at the Chandra X-ray Observatory. It is also notable for being the first case of a quasar being lensed by an almost face-on spiral galaxy.
Using the Hubble Space Telescope, a double ring has been found by Raphael Gavazzi of the STScI and Tommaso Treu of the University of California, Santa Barbara. This arises from the light from three galaxies at distances of 3, 6, and 11 billion light years. Such rings help in understanding the distribution of dark matter, dark energy, the nature of distant galaxies, and the curvature of the universe. The odds of finding such a double ring are 1 in 10,000. Sampling 50 suitable double rings would provide astronomers with a more accurate measurement of the dark matter content of the universe and the equation of state of the dark energy to within 10 percent precision.
To the right is a simulation depicting a zoom on a Schwarzschild black hole in front of the Milky Way. The first Einstein ring corresponds to the most distorted region of the picture and is clearly depicted by the galactic disc. The zoom then reveals a series of 4 extra rings, increasingly thinner and closer to the black hole shadow. They are easily seen through the multiple images of the galactic disk. The odd-numbered rings correspond to points which are behind the black hole (from the observer's position) and correspond here to the bright yellow region of the galactic disc (close to the galactic center), whereas the even-numbered rings correspond to images of objects which are behind the observer, which appear bluer since the corresponding part of the galactic disc is thinner and hence dimmer here.
- Drakeford, Jason; Corum, Jonathan; Overbye, Dennis (March 5, 2015). "Einstein's Telescope - video (02:32)". New York Times. Retrieved December 27, 2015. - Overbye, Dennis (March 5, 2015). "Astronomers Observe Supernova and Find They're Watching Reruns". New York Times. Retrieved March 5, 2015.
- "A Bull's Eye for MERLIN and the Hubble". University of Manchester. 27 March 1998. - "ALMA at Full Stretch Yields Spectacular Images". ESO Announcement. Retrieved 22 April 2015. - Belokurov, V.; et al. (January 2009). "Two new large-separation gravitational lenses from SDSS". Monthly Notices of the Royal Astronomical Society 392 (1): 104–112. arXiv:0806.4188. Bibcode:2009MNRAS.392..104B. doi:10.1111/j.1365-2966.2008.14075.x. Retrieved 2015-10-14. - Loff, Sarah; Dunbar, Brian (10 February 2015). "Hubble Sees A Smiling Lens". NASA. Retrieved 10 February 2015. - "Discovery of the First "Einstein Ring" Gravitational Lens". NRAO. 2000. Retrieved 2012-02-08. - Browne, Malcolm W. (1998-03-31). "'Einstein Ring' Caused by Space Warping Is Found". The New York Times. Retrieved 2010-05-01. - Vegetti, Simona; et al. (January 2012). "Gravitational detection of a low-mass dark satellite at cosmological distance". Nature 481 (7381): 341–343. arXiv:1201.3643. Bibcode:2012Natur.481..341V. doi:10.1038/nature10669. Retrieved 16 July 2014. - Bolton, A; et al. "Hubble, Sloan Quadruple Number of Known Optical Einstein Rings". Hubblesite. Retrieved 2014-07-16. - Auger, Matt; et al. (November 2009). "The Sloan Lens ACS Survey. IX. Colors, Lensing and Stellar Masses of Early-type Galaxies". The Astrophysical Journal 705 (2): 1099–1115. arXiv:0911.2471. Bibcode:2009ApJ...705.1099A. doi:10.1088/0004-637X/705/2/1099. Retrieved 16 July 2014. - Cabanac, Remi; et al. (2005-04-27). "Discovery of a high-redshift Einstein ring". Astronomy and Astrophysics 436 (2): L21–L25. arXiv:astro-ph/0504585. Bibcode:2005A&A...436L..21C. doi:10.1051/0004-6361:200500115. Retrieved 2014-07-15. - Belokurov, V.; et al. (December 2007). "The Cosmic Horseshoe: Discovery of an Einstein Ring around a Giant Luminous Red Galaxy". The Astrophysical Journal 671 (1): L9–L12. arXiv:0706.2326. Bibcode:2007ApJ...671L...9B. doi:10.1086/524948. Retrieved 2014-07-15. - Gavazzi, Raphael; et al. (April 2008). "The Sloan Lens ACS Survey. VI: Discovery and Analysis of a Double Einstein Ring". The Astrophysical Journal 677 (2): 1046–1059. arXiv:0801.1555. Bibcode:2008ApJ...677.1046G. doi:10.1086/529541. Retrieved 2014-04-15. - "Montage of the SDP.81 Einstein Ring and the lensed galaxy". Retrieved 9 June 2015. - Mathur, Smita; Nair, Sunita (20 July 1997). "X-Ray Absorption toward the Einstein Ring Source PKS 1830-211". The Astrophysical Journal 484: 140–144. arXiv:astro-ph/9703015. Bibcode:1997ApJ...484..140M. doi:10.1086/304327. Retrieved 16 July 2014. - Gupta, Varsha. "Chandra Detection of AN X-Ray Einstein Ring in PKS 1830-211". ResearchGate.net. Retrieved 16 July 2014. - Courbin, Frederic (August 2002). "Cosmic alignment towards the radio Einstein ring PKS 1830-211 ?". The Astrophysical Journal 575 (1): 95–102. arXiv:astro-ph/0202026. Bibcode:2002ApJ...575...95C. doi:10.1086/341261. Retrieved 16 July 2014. - Langston, G.I.; et al. (May 1989). "MG 1654+1346 - an Einstein Ring image of a quasar radio lobe". Astronomical Journal 97: 1283–1290. Bibcode:1989AJ.....97.1283L. doi:10.1086/115071. Retrieved 16 July 2014. - "Hubble Finds Double Einstein Ring". Hubblesite.org. Space Telescope Science Institute. Retrieved 2008-01-26. - Cabanac, R. A.; et al. (2005). "Discovery of a high-redshift Einstein ring". Astronomy and Astrophysics 436 (2): L21–L25. arXiv:astro-ph/0504585. Bibcode:2005A&A...436L..21C. doi:10.1051/0004-6361:200500115. (refers to FOR J0332-3357) - Chwolson, O (1924). "Über eine mögliche Form fiktiver Doppelsterne". 
Astronomische Nachrichten 221 (20): 329. Bibcode:1924AN....221..329C. doi:10.1002/asna.19242212003. (The first paper to propose rings) - Einstein, Albert (1936). "Lens-like Action of a Star by the Deviation of Light in the Gravitational Field" (PDF). Science 84 (2188): 506–507. Bibcode:1936Sci....84..506E. doi:10.1126/science.84.2188.506. PMID 17769014. (The famous Einstein Ring paper) - Hewitt, J (1988). "Unusual radio source MG1131+0456 - A possible Einstein ring". Nature 333: 537. Bibcode:1988Natur.333..537H. doi:10.1038/333537a0. - Renn, Jurgen; Sauer, Tilman; Stachel, John (1997). "The Origin of Gravitational Lensing: A Postscript to Einstein's 1936 Science paper". Science 275 (5297): 184–186. Bibcode:1997Sci...275..184R. doi:10.1126/science.275.5297.184. PMID 8985006. - King, L (1998). "A complete infrared Einstein ring in the gravitational lens system B1938 + 666". MNRAS 295: 41. arXiv:astro-ph/9710171. Bibcode:1998MNRAS.295L..41K. doi:10.1046/j.1365-8711.1998.295241.x.
https://en.wikipedia.org/wiki/Einstein_ring
4
7.4.2 Nitrogen Compounds The N cycle is integral to functioning of the Earth system and to climate (Vitousek et al., 1997; Holland et al., 2005a). Over the last century, human activities have dramatically increased emissions and removal of reactive N to the global atmosphere by as much as three to five fold. Perturbations of the N cycle affect the atmosphere climate system through production of three key N-containing trace gases: N2O, ammonia (NH3) and NOx (nitric oxide (NO) + nitrogen dioxide (NO2)). Nitrous oxide is the fourth largest single contributor to positive radiative forcing, and serves as the only long-lived atmospheric tracer of human perturbations of the global N cycle (Holland et al., 2005a). Nitrogen oxides have short atmospheric lifetimes of hours to days (Prather et al., 2001). The dominant impact of NOx emissions on the climate is through the formation of tropospheric ozone, the third largest single contributor to positive radiative forcing (Sections 2.3.6, 7.4.4). Emissions of NOx generate indirect negative radiative forcing by shortening the atmospheric lifetime of CH4 (Prather 2002). Ammonia contributes to the formation of sulphate and nitrate aerosols, thereby contributing to aerosol cooling and the aerosol indirect effect (Section 7.5), and to increased nutrient supply for the carbon cycle (Section 7.5). Ammonium and NOx are removed from the atmosphere by deposition, thus affecting the carbon cycle through increased nutrient supply (Section 18.104.22.168.3). Atmospheric concentrations of N2O have risen 16%, from about 270 ppb during the pre-industrial era to 319 ppb in 2005 (Figure 7.16a). The average annual growth rate for 1999 to 2000 was 0.85 to 1.1 ppb yr–1, or about 0.3% per year (WMO, 2003). The main change in the global N2O budget since the TAR is quantification of the substantial human-driven emission of N2O (Table 7.7; Naqvi et al., 2000; Nevison et al., 2004; Kroeze et al., 2005; Hirsch et al., 2006). The annual source of N2O from the Earth’s surface has increased by about 40 to 50% over pre-industrial levels as a result of human activity (Hirsch et al., 2006). Human activity has increased N supply to coastal and open oceans, resulting in decreased O2 availability and N2O emissions (Naqvi et al., 2000; Nevison et al., 2004). Figure 7.16. (a) Changes in the emissions of fuel combustion NOx and atmospheric N2O mixing ratios since 1750. Mixing ratios of N2O provide the atmospheric measurement constraint on global changes in the N cycle. (b) Changes in the indices of the global agricultural N cycle since 1850: the production of manure, fertilizer and estimates of crop N fixation. For data sources see http://www-eosdis.ornl.gov/ (Holland et al., 2005b) and http://www.cmdl.noaa.gov/. Figure adapted from Holland et al. (2005c). Since the TAR, both top-down and bottom-up estimates of N2O have been refined. Agriculture remains the single biggest anthropogenic N2O source (Bouwman et al., 2002; Smith and Conen, 2004; Del Grosso et al., 2005). Land use change continues to affect N2O and NO emissions (Neill et al., 2005): logging is estimated to increase N2O and NO emissions by 30 to 350% depending on conditions (Keller et al., 2005). Both studies underscore the importance of N supply, temperature and moisture as regulators of trace gas emissions. The inclusion of several minor sources (human excreta, landfills and atmospheric deposition) has increased the total bottom-up budget to 20.6 TgN yr–1 (Bouwman et al., 2002). 
Sources of N2O now estimated since the TAR include coastal N2O fluxes of 0.2 TgN yr–1 (±70%; Nevison et al., 2004) and river and estuarine N2O fluxes of 1.5 TgN yr–1 (Kroeze et al., 2005). Box model calculations show the additional river and estuarine sources to be consistent with the observed rise in atmospheric N2O (Kroeze et al., 2005). Top-down estimates of surface sources use observed concentrations to constrain total sources and their spatial distributions. A simple calculation, using the present-day N2O burden divided by its atmospheric lifetime, yields a global stratospheric loss of about 12.5 ± 2.5 TgN yr–1. Combined with the atmospheric increase, this loss yields a surface source of 16 TgN yr–1. An inverse modelling study of the surface flux of N2O yields a global source of 17.2 to 17.4 TgN yr–1 with an estimated uncertainty of 1.4 (1 standard deviation; Hirsch et al., 2006). The largest sources of N2O are from land at tropical latitudes, the majority located north of the equator. The Hirsch et al. inversion results further suggest that N2O source estimates from agriculture and fertilizer may have increased markedly over the last three decades when compared with an earlier inverse model estimate (Prinn et al., 1990). Bottom-up estimates, which sum individual source estimates, are more evenly distributed with latitude and lack temporal variability. However, there is clear consistency between top-down and bottom-up global source estimates, which are 17.3 (15.8–18.4) and 17.7 (8.5–27.7) TgN yr–1, respectively. Concentrations of NOx and reduced nitrogen (NHx = NH3 + ammonium ion (NH4+)) are difficult to measure because the atmospheric lifetimes of hours to days instead of years generate pronounced spatial and temporal variations in their distributions. Atmospheric concentrations of NOx and NHx vary more regionally and temporally than concentrations of N2O. Total global NOx emissions have increased from an estimated pre-industrial value of 12 TgN yr–1 (Holland et al., 1999; Galloway et al., 2004) to between 42 and 47 TgN yr–1 in 2000 (Table 7.7). Lamarque et al. (2005a) forecast them to be 105 to 131 TgN yr–1 by 2100. The range of surface NOx emissions (excluding lightning and aircraft) used in the current generation of global models is 33 to 45 TgN yr–1 with small ranges for individual sources. The agreement reflects the use of similar inventories and parametrizations. Current estimates of NOx emissions from fossil fuel combustion are smaller than in the TAR. Since the TAR, estimates of tropospheric NO2 columns from space by the Global Ozone Monitoring Experiment (GOME, launched in 1995) and the SCanning Imaging Absorption SpectroMeter for Atmospheric CHartographY (SCIAMACHY, launched in 2002) (Richter and Burrows, 2002; Heue et al., 2005) provide constraints on estimates of NOx emissions (Leue et al., 2001). Martin et al. (2003a) use GOME data to estimate a global surface source of NOx of 38 TgN yr–1 for 1996 to 1997 with an uncertainty factor of 1.6. Jaeglé et al. (2005) partition the surface NOx source inferred from GOME into 25.6 TgN yr–1 from fuels, 5.9 TgN yr–1 from biomass burning and 8.9 TgN yr–1 from soils. Interactions between soil emissions and scavenging by plant canopies have a significant impact on soil NOx emissions to the free troposphere: the impact may be greatest in subtropical and tropical regions where emissions from fuel combustion are rising (Ganzeveld et al., 2002). Boersma et al. 
(2005) find that GOME data constrain the global lightning NOx source for 1997 to the range 1.1 to 6.4 TgN yr–1. Comparison of the tropospheric NO2 column of three state-of-the-art retrievals from GOME for the year 2000 with model results from 17 global atmospheric chemistry models highlights significant differences among the various models and among the three GOME retrievals (Figure 7.17, van Noije et al., 2006). The discrepancies among the retrievals (10 to 50% in the annual mean over polluted regions) indicate that the previously estimated retrieval uncertainties have a large systematic component. Top-down estimates of NOx emissions from satellite retrievals of tropospheric NO2 are strongly dependent on the choice of model and retrieval. Figure 7.17. Tropospheric column NO2 from (a) satellite measurements and (b) atmospheric chemistry models. The maps represent ensemble average annual mean tropospheric NO2 column density maps for the year 2000. The satellite retrieval ensemble comprises three state-of-the-art retrievals from GOME; the model ensemble includes 17 global atmospheric chemistry models. These maps were obtained after smoothing the data to a common horizontal resolution of 5° × 5° (adapted from van Noije et al., 2006). Knowledge of the spatial distribution of NOx emissions has evolved significantly since the TAR. An Asian increase in emissions has been compensated by a European decrease over the past decade (Naja et al., 2003). Richter et al. (2005; see also Irie et al., 2005) use trends for 1996 to 2004 observed by GOME and SCIAMACHY to deduce a 50% increase in NOx emissions over industrial areas of China. Observations of NO2 in shipping lanes from GOME (Beirle et al., 2004) and SCIAMACHY (Richter et al., 2004) give values at the low end of emission inventories. Data from GOME and SCIAMACHY further reveal large pulses of soil NOx emissions associated with rain (Jaeglé et al., 2004) and fertilizer application (Bertram et al., 2005). All indices show an increase since pre-industrial times in the intensity of agricultural nitrogen cycling, the primary source of NH3 emissions (Figure 7.16b and Table 7.7; Bouwman et al., 2002). Total global NH3 emissions have increased from an estimated pre-industrial value of 11 TgN yr–1 to 54 TgN yr–1 for 2000 (Holland et al., 1999; Galloway et al., 2004), and are projected to increase to 116 TgN yr–1 by 2050. Table 7.7. Global sources (TgN yr–1) of NOx, NH3 and N2O for the 1990s. 
| Source | NOx (TARa) | NOx (AR4b) | NH3 (TARa) | NH3 (AR4a) | N2O (TARc) | N2O (AR4) |
|---|---|---|---|---|---|---|
| Anthropogenic sources | | | | | | |
| Fossil fuel combustion & industrial processes | 33 (20–24) | 25.6 (21–28) | 0.3 (0.1–0.5) | 2.5d | 1.3/0.7 (0.2–1.8) | 0.7 (0.2–1.8)d |
| Aircraft | 0.7 (0.2–0.9) | – e (0.5–0.8) | – | – | – | – |
| Agriculture | 2.3f (0–4) | 1.6g | 34.2 (16–48) | 35g (16–48) | 6.3/2.9 (0.9–17.9) | 2.8 (1.7–4.8)g |
| Biomass and biofuel burning | 7.1 (2–12) | 5.9 (6–12) | 5.7 (3–8) | 5.4d (3–8) | 0.5 (0.2–1.0) | 0.7 (0.2–1.0)g |
| Human excreta | – | – | 2.6 (1.3–3.9) | 2.6g (1.3–3.9) | – | 0.2g (0.1–0.3)h |
| Rivers, estuaries, coastal zones | – | – | – | – | – | 1.7 (0.5–2.9)i |
| Atmospheric deposition | – | 0.3g | – | – | – | 0.6j (0.3–0.9)h |
| Anthropogenic total | 43.1 | 33.4 | 42.8 | 45.5 | 8.1/4.1 | 6.7 |
| Natural sources | | | | | | |
| Soils under natural vegetation | 3.3f (3–8) | 7.3j (5–8) | 2.4 (1–10) | 2.4g (1–10) | 6.0/6.6 (3.3–9.9) | 6.6 (3.3–9.0)g |
| Oceans | – | – | 8.2 (3–16) | 8.2g (3–16) | 3.0/3.6 (1.0–5.7) | 3.8 (1.8–5.8)k |
| Lightning | 5 (2–12) | 1.1–6.4 (3–7) | – | – | – | – |
| Atmospheric chemistry | <0.5 | – | – | – | 0.6 (0.3–1.2) | 0.6 (0.3–1.2)c |
| Natural total | 8.8 | 8.4–13.7 | 10.6 | 10.6 | 9.6/10.8 | 11.0 |
| Total sources | 51.9 (27.2–60.9) | 41.8–47.1 (37.4–57.7) | 53.4 (40–70) | 56.1 (26.8–78.4) | 17.7/14.9 (5.9–37.5) | 17.7 (8.5–27.7) |

The primary sink for NHx and NOx and their reaction products is wet and dry deposition. Estimates of the removal rates of both NHx and NOx are provided by measurements of wet deposition over the USA and Western Europe to quantify acid rain inputs (Hauglustaine et al., 2004; Holland et al., 2005a; Lamarque et al., 2005a). Chemical transport models represent the wet and dry deposition of NOx and NHx and their reaction products. A study of 29 simulations with 6 different tropospheric chemistry models, focusing on present-day and 2100 conditions for NOx and its reaction products, projects an average increase in N deposition over land by a factor of 2.5 by 2100 (Lamarque et al., 2005b), mostly due to increases in NOx emissions. Nitrogen deposition rates over Asia are projected to increase by a factor of 1.4 to 2 by 2030. Climate contributions to the changes in oxidized N deposition are limited by the models’ ability to represent changes in precipitation patterns. An intercomparison of 26 global atmospheric chemistry models demonstrates that current scenarios and projections are not sufficient to stabilise or reduce N deposition or ozone pollution before 2030 (Dentener et al., 2006).
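The simple top-down N2O budget described earlier in this section (stratospheric loss equals the atmospheric burden divided by the lifetime, and the implied surface source equals that loss plus the observed atmospheric increase) can be reproduced with round numbers. The sketch below is illustrative only: the conversion factor of roughly 4.8 TgN per ppb of N2O, the lifetime of about 120 years, the mixing ratio of about 319 ppb and the growth rate of about 0.7 ppb yr–1 are assumed values, not figures taken from the text above.

```python
# Illustrative top-down N2O budget; all input values are assumptions, not IPCC numbers.
TGN_PER_PPB = 4.8        # TgN contained in 1 ppb of atmospheric N2O (assumed)
MIXING_RATIO_PPB = 319   # present-day N2O mixing ratio in ppb (assumed)
LIFETIME_YR = 120        # atmospheric lifetime in years (assumed)
GROWTH_PPB_PER_YR = 0.7  # observed growth rate in ppb per year (assumed)

burden_tgn = MIXING_RATIO_PPB * TGN_PER_PPB            # ~1530 TgN in the atmosphere
loss_tgn_per_yr = burden_tgn / LIFETIME_YR             # burden / lifetime = stratospheric loss
increase_tgn_per_yr = GROWTH_PPB_PER_YR * TGN_PER_PPB  # observed atmospheric increase
source_tgn_per_yr = loss_tgn_per_yr + increase_tgn_per_yr

print(f"loss   ~ {loss_tgn_per_yr:.1f} TgN/yr")    # close to the 12.5 +/- 2.5 TgN/yr quoted above
print(f"source ~ {source_tgn_per_yr:.1f} TgN/yr")  # close to the 16 TgN/yr quoted above
```

With these assumed inputs the loss comes out near 12.8 TgN yr–1 and the implied surface source near 16 TgN yr–1, consistent with the figures quoted in the text.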
http://www.ipcc.ch/publications_and_data/ar4/wg1/en/ch7s7-4-2.html
4.09375
In United States history, the Redeemers were a white political coalition in the Southern United States during the Reconstruction era that followed the Civil War. Redeemers were the southern wing of the Bourbon Democrats, the conservative, pro-business faction in the Democratic Party, who pursued a policy of Redemption, seeking to oust the Radical Republican coalition of freedmen, "carpetbaggers", and "scalawags". They generally were led by the rich landowners, businessmen and professionals, and dominated Southern politics in most areas from the 1870s to 1910. During Reconstruction, the South was under occupation by federal forces and Southern state governments were dominated by Republicans. Republicans nationally pressed for the granting of political rights to the newly freed slaves as the key to their becoming full citizens. The Thirteenth Amendment (banning slavery), Fourteenth Amendment (guaranteeing the civil rights of former slaves and ensuring equal protection of the laws), and Fifteenth Amendment (prohibiting the denial of the right to vote on grounds of race, color, or previous condition of servitude) enshrined such political rights in the Constitution. Numerous educated blacks moved to the South to work for Reconstruction, and some blacks attained positions of political power under these conditions. However, the Reconstruction governments were unpopular with many white Southerners, who were not willing to accept defeat and continued to try to prevent black political activity by any means. While the elite planter class often supported insurgencies, violence against freedmen and other Republicans was often carried out by other whites; insurgency took the form of the secret Ku Klux Klan in the first years after the war. In the 1870s, secret paramilitary organizations, such as the White League in Louisiana and Red Shirts in Mississippi and North Carolina undermined the opposition. These paramilitary bands used violence and threats to undermine the Republican vote. By the presidential election of 1876, only three Southern states – Louisiana, South Carolina, and Florida – were "unredeemed", or not yet taken over by white Democrats. The disputed Presidential election between Rutherford B. Hayes (the Republican governor of Ohio) and Samuel J. Tilden (the Democratic governor of New York) was allegedly resolved by the Compromise of 1877, also known as the Corrupt Bargain or the Bargain of 1877. In this compromise, it was claimed, Hayes became President in exchange for numerous favors to the South, one of which was the removal of Federal troops from the remaining "unredeemed" Southern states; this was however a policy Hayes had endorsed during his campaign. With the removal of these forces, Reconstruction came to an end. In the 1870s, southern Democrats began to muster more political power as former Confederates began to vote again. It was a movement that gathered energy up until the Compromise of 1877, in the process known as the Redemption. White Democratic Southerners saw themselves as redeeming the South by regaining power. They appealed to scalawags (white Southerners who supported the Republican Party after the civil war and during the time of reconstruction). More importantly, in a second wave of violence following the suppression of the Ku Klux Klan, violence began to increase in the Deep South. In 1868 white terrorists tried to prevent Republicans from winning the fall election in Louisiana. Over a few days, they killed some two hundred freedmen in St. Landry Parish. 
Other violence erupted. From April to October, there were 1,081 political murders in Louisiana, in which most of the victims were freedmen. Violence was part of campaigns prior to the election of 1872 in several states. In 1874 and 1875, more formal paramilitary groups affiliated with the Democratic Party conducted intimidation, terrorism and violence against black voters and their allies to reduce Republican voting and turn officeholders out. These included the White League and Red Shirts. They worked openly for specific political ends, and often solicited coverage of their activities by the press. Every election from 1868 on was surrounded by intimidation and violence; they were usually marked by fraud as well. In the aftermath of the disputed gubernatorial election of 1872 in Louisiana, for instance, the competing governors each certified slates of local officers. This situation contributed to the Colfax Massacre of 1873, in which white Democratic militia killed more than 100 Republican blacks in a confrontation over control of parish offices. Three whites died in the violence. In 1874 remnants of white militia formed the White League, a Democratic paramilitary group started first in Grant Parish of the Red River area of Louisiana, with chapters arising across the state, especially in rural areas. In August the White League turned out six Republican office holders in Coushatta, Louisiana, and told them to leave the state. Before they could make their way, they and five to twenty black witnesses were assassinated. In September, thousands of armed white militia, supporters of the Democratic gubernatorial candidate John McEnery, fought against New Orleans police and state militia in what was called the "Battle of Liberty Place". They took over the state government offices in New Orleans and occupied the capitol and armory. They turned Republican governor William Pitt Kellogg out of office, and retreated only in the face of the arrival of Federal troops sent by President Ulysses S. Grant. Similarly, in Mississippi, the Red Shirts formed as a prominent paramilitary group that enforced Democratic voting by intimidation and murder. Chapters of paramilitary Red Shirts arose and were active in North Carolina and South Carolina as well. They disrupted Republican meetings, killed leaders and officeholders, intimidated voters at the polls, or kept them away altogether. The Redeemers' program emphasized opposition to the Republican governments, which they considered to be corrupt and a violation of true republican principles. They also worked to reestablish white supremacy. The crippling national economic problems and reliance on cotton meant that the South was struggling financially. Redeemers denounced taxes higher than what they had known before the war. At that time, however, the states had few functions, and planters maintained private institutions only. Redeemers wanted to reduce state debts. 
Once in power, they typically cut government spending; shortened legislative sessions; lowered politicians' salaries; scaled back public aid to railroads and corporations; and reduced support for the new systems of public education and some welfare institutions. As Democrats took over state legislatures, they worked to change voter registration rules to strip most blacks and many poor whites of their ability to vote. Blacks continued to vote in significant numbers well into the 1880s, with many winning local offices. Black Congressmen continued to be elected, albeit in ever smaller numbers, until the 1890s. George Henry White, the last Southern black of the post-Reconstruction period to serve in Congress, retired in 1901, leaving Congress completely white. In the 1890s, the Democrats faced challenges with the Agrarian Revolt, when their control of the South was threatened by the Farmers Alliance, the effects of Bimetallism and the newly created People's Party. On the national level, William Jennings Bryan defeated the Bourbons and took control of the Democratic Party nationwide. Democrats worked hard to prevent such populist coalitions. In the former Confederate South, from 1890 to 1908, starting with Mississippi, legislatures of ten of the eleven states passed disfranchising constitutions, which had new provisions for poll taxes, literacy tests, residency requirements and other devices that effectively disfranchised nearly all blacks and tens of thousands of poor whites. Hundreds of thousands of people were removed from voter registration rolls soon after these provisions were implemented. In Alabama, for instance, in 1900 fourteen Black Belt counties had 79,311 voters on the rolls; by June 1, 1903, after the new constitution was passed, registration had dropped to just 1,081. Statewide Alabama in 1900 had 181,315 blacks eligible to vote. By 1903 only 2,980 were registered, although at least 74,000 were literate. From 1900 to 1903, white registered voters fell by more than 40,000, although their population grew in overall number. By 1941, more poor whites than blacks had been disfranchised in Alabama, mostly due to effects of the cumulative poll tax. Estimates were that 600,000 whites and 500,000 blacks had been disfranchised. African Americans and poor whites were shut out of the political process and disfranchised. Southern legislatures passed Jim Crow laws imposing segregation in public facilities and places. The discrimination, segregation and disfranchisement lasted well into the later decades of the 20th century. They were shut out of all offices at the local, state, as well as federal levels, as those who could not vote could not run for office or serve on juries. While Congress had actively intervened for more than 20 years in elections in the South which the House Elections Committee judged to be flawed, after 1896 it backed off from intervening. Many Northern legislators were outraged about the disfranchisement of blacks and some proposed reducing Southern representation in Congress. They never managed to accomplish that, as Southern representatives formed a strong, one-party voting block for decades. Although educated African Americans mounted legal challenges (with many secretly funded by educator Booker T. Washington and his northern allies), the Supreme Court upheld Mississippi's and Alabama's provisions in its rulings in Williams v. Mississippi (1898) and Giles v. Harris (1903). People in the movement chose the term "Redemption" from Christian theology. Historian Daniel W. 
Stowell concludes that white Southerners appropriated the term to describe the political transformation they desired, that is, the end of Reconstruction. This term helped unify numerous white voters, and encompassed efforts to purge southern society of its sins and to remove Republican political leaders. It also represented the birth of a new southern society, rather than a return to its antebellum predecessor. Historian Gaines M. Foster explains how the South became known as the "Bible Belt" by connecting this characterization with changing attitudes caused by slavery's demise. Freed from preoccupation with federal intervention over slavery, and even citing it as precedent, white southerners joined northerners in the national crusade to legislate morality. Viewed by some as a "bulwark of morality", the largely Protestant South took on a Bible Belt identity long before H. L. Mencken coined the term.

The "redeemed" South

When Reconstruction died, so did all hope for national enforcement of adherence to the constitutional amendments that the U.S. Congress had passed in the wake of the Civil War. As the last Federal troops left the ex-Confederacy, two old foes of American politics reappeared at the heart of the Southern polity – the twin, inflammatory issues of states' rights and race. It was precisely on the ground of these two issues that the Civil War had broken out, and in 1877, sixteen years after the secession crisis, the South reaffirmed control over them. "The slave went free; stood a brief moment in the sun; then moved back again toward slavery", wrote W. E. B. Du Bois. The black community in the South was brought back under the yoke of the Southern Democrats, who had been politically undermined during Reconstruction. Whites in the South were committed to reestablishing their own sociopolitical structure, with the goal of a new social order enforcing racial subordination and labor control. While the Republicans succeeded in maintaining some power in part of the Upper South, such as Tennessee, in the Deep South there was a return to "home rule". In the aftermath of the Compromise of 1877, Southern Democrats held the South's black community under increasingly tight control. Politically, blacks were gradually evicted from public office, as the few that remained saw the sway they held over local politics considerably diminished. Socially, the situation was worse, as the Southern Democrats tightened their grip on the labor force. Vagrancy and "anti-enticement" laws were reinstituted. It became illegal to be jobless, or to leave a job before the contract expired. Economically, blacks were stripped of independence, as new laws gave white planters control over credit lines and property. Effectively, the black community was placed under a three-fold subjugation that was reminiscent of slavery. In the years immediately following Reconstruction, most blacks and former abolitionists held that Reconstruction lost the struggle for civil rights for black people because of violence against blacks and against white Republicans. Frederick Douglass and Reconstruction Congressman John R. Lynch cited the withdrawal of federal troops from the South as a primary reason for the loss of voting rights and other civil rights by African Americans after 1877. 
By the turn of the 20th century, white historians, led by the Dunning School, saw Reconstruction as a failure because of its political and financial corruption, its failure to heal the hatreds of the war, and its control by self-serving northern politicians, such as the people around President Grant. Historian Claude Bowers said that the worst part of what he called "the Tragic Era" was the extension of voting rights to freedmen, a policy he claimed led to misgovernment and corruption. The freedmen, the Dunning School historians argued, were not at fault because they were manipulated by corrupt white carpetbaggers interested only in raiding the state treasury and staying in power. They agreed the South had to be "redeemed" by foes of corruption. Reconstruction, in short, violated the values of "republicanism", and these historians classified all Republicans as "extremists". This interpretation of events was the hallmark of the Dunning School, which dominated most history textbooks from 1900 to the 1960s. Beginning in the 1930s, historians such as C. Vann Woodward and Howard K. Beale attacked the "redemptionist" interpretation of Reconstruction, calling themselves "revisionists" and claiming that the real issues were economic. The Northern Radicals were tools of the railroads, and the Republicans in the South were manipulated to do their bidding. The Redeemers, furthermore, were also tools of the railroads and were themselves corrupt. In 1935, W. E. B. Du Bois published a Marxist analysis in his Black Reconstruction: An Essay toward a History of the Part which Black Folk Played in the Attempt to Reconstruct Democracy in America, 1860–1880. His book emphasized the role of African Americans during Reconstruction, noted their collaboration with whites, their lack of majority in most legislatures, and also the achievements of Reconstruction: establishing universal public education, improving prisons, establishing orphanages and other charitable institutions, and trying to improve state funding for the welfare of all citizens. He also noted that despite complaints, most Southern states kept the constitutions of Reconstruction for many years, some for a quarter of a century. By the 1960s, neo-abolitionist historians led by Kenneth Stampp and Eric Foner focused on the struggle of freedmen. While acknowledging corruption in the Reconstruction era, they hold that the Dunning School over-emphasized it while ignoring the worst violations of republican principles — namely denying African Americans their civil rights, including their right to vote.

Supreme Court challenges

Although African Americans mounted legal challenges, the U.S. Supreme Court upheld Mississippi's and Alabama's provisions in its rulings in Williams v. Mississippi (1898), Giles v. Harris (1903), and Giles v. Teasley. Booker T. Washington secretly helped fund and arrange representation for such legal challenges, raising money from northern patrons who helped support Tuskegee University. When white primaries were ruled unconstitutional by the Supreme Court in Smith v. Allwright (1944), civil rights organizations rushed to register African-American voters. By 1947 the All-Citizens Registration Committee (ACRC) of Atlanta managed to get 125,000 voters registered in Georgia, raising black participation to 18.8% of those eligible. This was a major increase from the 20,000 on the rolls who had managed to get through administrative barriers in 1940. 
Georgia, among other Southern states, passed new legislation (1958) to once again repress black voter registration. It was not until African-American leaders gained passage of the Civil Rights Act of 1957, the Civil Rights Act of 1964, and the Voting Rights Act of 1965 that the American citizens who were first granted suffrage by the Fifteenth Amendment after the Civil War finally regained the ability to exercise their right to vote. - Jim Crow laws - Disfranchisement after the Reconstruction Era - Phoenix Election Riot, in South Carolina - Wes Allison, "Election 2000 much like Election 1876", St. Petersburg Times, November 17, 2000. - Charles Lane, The Day Freedom Died, Henry Holt & Co., 2009, pp. 18–19. - Glenn Feldman, The Disfranchisement Myth: Poor Whites and Suffrage Restriction in Alabama, Athens: University of Georgia Press, 2004, p. 136. - COMMITTEE AT ODDS ON REAPPORTIONMENT, The New York Times, December 21, 1900; accessed March 10, 2008. - Richard H. Pildes, "Democracy, Anti-Democracy, and the Canon", Constitutional Commentary, Vol. 17, 2000, pp. 12 and 21, accessed March 10, 2008. - Blum and Poole (2005). - Eric Foner, "A Short History of Reconstruction: 1863–1877", New York: Harper & Row Publishers, 1990, p. 249 - Foner, "A Short History of Reconstruction" (1990), p. 250. - Richard H. Pildes, "Democracy, Anti-Democracy, and the Canon," Constitutional Commentary, Vol. 17, 2000, pp. 12 and 21], accessed March 10, 2008. - Chandler Davidson and Bernard Grofman, Quiet Revolution in the South: The Impact of the Voting Rights Act, Princeton: Princeton University Press, 1994, p. 70. - Ayers, Edward L. The Promise of the New South: Life after Reconstruction (1993). - Baggett, James Alex. The Scalawags: Southern Dissenters in the Civil War and Reconstruction (2003), a statistical study of 732 Scalawags and 666 Redeemers. - Blum, Edward J., and W. Scott Poole, eds. Vale of Tears: New Essays on Religion and Reconstruction. Mercer University Press, 2005. ISBN 0-86554-987-7. - Du Bois, W. E. Burghardt. Black Reconstruction in America 1860-1880 (1935), explores the role of African Americans during Reconstruction - Foner, Eric. Reconstruction: America's Unfinished Revolution, 1863–1877 (2002). - Garner, James Wilford. Reconstruction in Mississippi (1901), a classic Dunning School text. - Gillette, William. Retreat from Reconstruction, 1869–1879 (1979). - Going, Allen J. "Alabama Bourbonism and Populism Revisited." Alabama Review 1983 36 (2): 83–109. ISSN 0002-4341. - Hart, Roger L. Redeemers, Bourbons, and Populists: Tennessee, 1870–1896. LSU Press, 1975. - Jones, Robert R. "James L. Kemper and the Virginia Redeemers Face the Race Question: A Reconsideration". Journal of Southern History, 1972 38 (3): 393–414. ISSN 0022-4642. - King, Ronald F. "A Most Corrupt Election: Louisiana in 1876." Studies in American Political Development, 2001 15(2): 123–137. ISSN 0898-588x. - King, Ronald F. "Counting the Votes: South Carolina's Stolen Election of 1876." Journal of Interdisciplinary History 2001 32 (2): 169–191. ISSN 0022-1953. - Moore, James Tice. "Redeemers Reconsidered: Change and Continuity in the Democratic South, 1870–1900" in the Journal of Southern History, Vol. 44, No. 3 (August 1978), pp. 357–378. - Moore, James Tice. "Origins of the Solid South: Redeemer Democrats and the Popular Will, 1870–1900." Southern Studies, 1983 22 (3): 285–301. ISSN 0735-8342. - Perman, Michael. The Road to Redemption: Southern Politics, 1869-1879. 
Chapel Hill, North Carolina: University of North Carolina Press, 1984. ISBN 0-8078-4141-2. - Perman, Michael. "Counter Reconstruction: The Role of Violence in Southern Redemption", in Eric Anderson and Alfred A. Moss, Jr, eds. The Facts of Reconstruction (1991) pp. 121–140. - Pildes, Richard H. "Democracy, Anti-Democracy, and the Canon", Constitutional Commentary, 17, (2000). - Polakoff, Keith I. The Politics of Inertia: The Election of 1876 and the End of Reconstruction (1973). - Rabonowitz, Howard K. Race Relations in the Urban South, 1865-1890 (1977). - Richardon, Heather Cox. The Death of Reconstruction (2001). - Wallenstein, Peter. From Slave South to New South: Public Policy in Nineteenth-Century Georgia (1987). - Wiggins; Sarah Woolfolk. The Scalawag in Alabama Politics, 1865—1881 (1991). - Williamson, Edward C. Florida Politics in the Gilded Age, 1877–1893 (1976). - Woodward, C. Vann. Origins of the New South, 1877–1913 (1951); emphasizes economic conflict between rich and poor. - Fleming, Walter L. Documentary History of Reconstruction: Political, Military, Social, Religious, Educational, and Industrial (1906), several hundred primary documents from all viewpoints - Hyman, Harold M., ed. The Radical Republicans and Reconstruction, 1861–1870 (1967), collection of longer speeches by Radical leaders - Lynch, John R. The Facts of Reconstruction(1913). Online text by African American member of the United States Congress during Reconstruction era.
https://en.wikipedia.org/wiki/Redeemers
4.03125
The Servicemen's Readjustment Act of 1944 (P.L. 78-346, 58 Stat. 284m), known informally as the G.I. Bill, was a law that provided a range of benefits for returning World War II veterans (commonly referred to as G.I.s). Benefits included low-cost mortgages, low-interest loans to start a business, cash payments of tuition and living expenses to attend university, high school or vocational education, as well as one year of unemployment compensation. It was available to every veteran who had been on active duty during the war years for at least one hundred twenty days and had not been dishonorably discharged; combat was not required. By 1956, roughly 2.2 million veterans had used the G.I. Bill education benefits in order to attend colleges or universities, and an additional 5.6 million used these benefits for some kind of training program. Historians and economists judge the G.I. Bill a major political and economic success—especially in contrast to the treatments of World War I veterans—and a major contribution to America's stock of human capital that sped long-term economic growth. Canada operated a similar program for its World War II veterans, with an economic impact similar to the American case. Since the original U.S. 1944 law, the term has come to include other veteran benefit programs created to assist veterans of subsequent wars as well as peacetime service.
- 1 History
- 2 Issues
- 3 Content
- 4 MGIB comparison chart
- 5 See also
- 6 References
- 7 Further reading
- 8 External links
On June 22, 1944, President Roosevelt signed the Servicemen's Readjustment Act of 1944, commonly known as the G.I. Bill of Rights, into law. During the war, politicians wanted to avoid the postwar confusion about veterans' benefits that became a political football in the 1920s and 1930s. President Franklin D. Roosevelt wanted a postwar assistance program to help the transition from wartime, but he wanted it to be based on need, helping poor Americans generally rather than veterans alone. The veterans' organizations mobilized support in Congress that rejected FDR's approach and provided benefits only to veterans of military service, including men and women. Ortiz says their efforts "entrenched the VFW and the Legion as the twin pillars of the American veterans' lobby for decades." Harry W. Colmery, a former national commander of the American Legion and former Republican National Chairman, is credited with writing the first draft of the G.I. Bill. He reportedly jotted down his ideas on stationery and a napkin at the Mayflower Hotel in Washington, D.C. U.S. Senator Ernest McFarland, D-Arizona, was actively involved in the bill's passage and is known, with Warren Atherton, as one of the "fathers of the G.I. Bill." One might then term Edith Nourse Rogers, R-Mass, who helped write and who co-sponsored the legislation, the "mother of the G.I. Bill". Like Colmery, her contribution to writing and passing this legislation has been obscured by time. The bill was introduced in the House on January 10, 1944, and in the Senate the following day; each chamber approved its own version of the bill. The bill that President Roosevelt initially proposed had a means test—only poor veterans would be aided. The G.I. Bill was created to prevent a repetition of the Bonus March of 1932, when World War I veterans marched on Washington to demand early payment of their promised service bonuses. An important provision of the G.I. Bill was low-interest, zero-down-payment home loans for servicemen, with more favorable terms for new construction compared to existing housing. 
This encouraged millions of American families to move out of urban apartments and into suburban homes. Another provision was known as the 52–20 clause. This enabled all former servicemen to receive $20 a week for up to 52 weeks while they were looking for work. Less than 20 percent of the money set aside for the 52–20 Club was distributed. Rather, most returning servicemen quickly found jobs or pursued higher education.

After World War II

A look at the available statistics reveals that the later G.I. Bills had an important influence on the lives of returning veterans, higher education, and the economy. A greater percentage of Vietnam veterans used G.I. Bill education benefits (72 percent) than World War II veterans (51 percent) or Korean War veterans (43 percent). Moreover, because of the ongoing military draft from 1940 to 1973, as many as one third of the population (when both veterans and their dependents are taken into account) were eligible for benefits from the expansion of veterans’ benefits. The success of the 1944 G.I. Bill prompted the government to offer similar measures to later generations of veterans. The Veterans’ Adjustment Act of 1952, signed into law on July 16, 1952, offered benefits to veterans of the Korean War who served for more than 90 days and had received an “other than dishonorable discharge.” Korean War veterans were not members of the "52–20 Club" like World War II vets; instead, they became entitled to unemployment compensation only at the end of a waiting period determined by the amount and disbursement dates of their mustering-out pay. They could receive up to 26 weeks of payments at $26 a week, subsidized by the federal government but administered by the various states. One improvement in the unemployment compensation for Korean War veterans was that they could receive both state and federal benefits, the federal benefits beginning once state benefits were exhausted. One significant difference between the 1944 G.I. Bill and the 1952 Act was that tuition fees were no longer paid directly to the chosen institution of higher education. Instead, veterans received a fixed monthly sum of $110, which they used to pay for their tuition, fees, books, and living expenses. The decision to end direct tuition payments to schools came after a 1950 House select committee uncovered incidents of overcharging of tuition rates by some institutions under the original G.I. Bill in an attempt to defraud the government. Although the monthly stipend proved sufficient for most Korean War veterans, the decision would have negative repercussions for later veterans. By the end of the program on January 31, 1965, approximately 2.4 million of 5.5 million eligible veterans had used their benefits: roughly 1.2 million for higher education, over 860,000 for other education purposes, and 318,000 for occupational training. Over 1.5 million Korean War veterans obtained home loans. Whereas the G.I. Bills of 1944 and 1952 were given to compensate veterans for wartime service, the Veterans Readjustment Benefits Act of 1966 (P.L. 89-358) changed the nature of military service in America by extending benefits to veterans who served during times of war and peace. At first there was some opposition to the concept of a peacetime G.I. Bill. 
President Dwight Eisenhower had rejected such a measure in 1959 after the Bradley Commission concluded that military service should be “an obligation of citizenship, not a basis for government benefits.” President Lyndon B. Johnson believed that many of his “Great Society” social programs negated the need for sweeping veterans’ benefits. But, prompted by unanimous support given the bill by Congress, Johnson signed it into law on March 3, 1966. Almost immediately, critics within the veterans’ community and on Capitol Hill charged that the bill did not go far enough. At first, single veterans who had served more than 180 days and had received an “other than dishonorable discharge” received only $100 a month from which they had to pay for tuition and all of their expenses. Most found this amount to be sufficient to pay only for books and minor fees, and not enough to live on or attend college full-time. In particular, veterans of the Vietnam War disliked the fact that the bill did not provide them with the same educational opportunities as their World War II predecessors. Consequently, during the early years of the program, only about 25% of Vietnam veterans used their education benefits. In the next decade, efforts were made to increase veterans’ benefits. Congress succeeded, often in the face of fierce objections from the fiscally conservative Nixon and Ford Administrations, in raising benefit levels. In 1967, a single veteran’s benefits were raised to $130 a month; in 1970 they rose to $175; under the Readjustment Assistance Act of 1972 the monthly allowance rose to $220; in 1974 it rose to $270, to $292 in 1976, and then to $311 a month in 1977. As the funding levels increased, the numbers of veterans entering higher education rose correspondingly. In 1976, ten years after the first veterans became eligible, the highest number of Vietnam-era veterans were enrolled in colleges and universities. By the end of the program, proportionally more Vietnam-era veterans (6.8 million out of 10.3 million eligible) had used their benefits for higher education than any previous generation of veterans. The United States military moved to an all-volunteer force in 1973, and veterans continued to receive benefits, in part as an inducement to enlist, under the Veterans Educational Assistance Program (VEAP) and the Montgomery G.I. Bill (MGIB). From December 1976 through 1987, veterans received assistance under the VEAP. The VEAP departed from previous programs by requiring participants to make a contribution to their education benefits. The Veterans Administration then matched their contributions at a rate of 2 to 1. Enlisted personnel could contribute up to $100 a month, to a maximum of $2,700. Benefits could be claimed for up to 36 months. To be eligible for VEAP, a veteran had to serve for more than 180 days and receive an “other than dishonorable discharge.” Nearly 700,000 veterans used their benefits for education and training under this program. In 1985, a bill sponsored by Democratic Congressman Gillespie V. "Sonny" Montgomery expanded the G.I. Bill. The MGIB replaced the VEAP for those who served after July 1, 1985. This was an entirely voluntary program in which participants could choose to forfeit $100 per month from their first year of pay. In return, eligible veterans received a tuition allowance and a monthly stipend for up to 36 months of eligible training or education. Although the G.I. Bill did not specifically advocate discrimination, it was interpreted differently for blacks than for whites. 
Historian Ira Katznelson argued that "the law was deliberately designed to accommodate Jim Crow". Because the programs were directed by local, white officials, many veterans did not benefit. Of the 67,000 mortgages insured by the G.I. Bill, fewer than 100 were taken out by non-whites. By 1946, only one fifth of the 100,000 blacks who had applied for educational benefits had registered in college. Furthermore, historically black colleges and universities (HBCUs) came under increased pressure as rising enrollments and strained resources forced them to turn away an estimated 20,000 veterans. HBCUs were already the poorest colleges and served, to most whites, only to keep blacks out of white colleges. HBCU resources were stretched even thinner when veterans’ demands necessitated a shift in the curriculum away from the traditional "preach and teach" course of study offered by the HBCUs. The United States Department of Veterans Affairs (VA), because of its strong affiliation with the all-white American Legion and VFW (Veterans of Foreign Wars), also became a formidable foe to many blacks in search of an education because it had the power to deny or grant the claims of black G.I.s. Additionally, banks and mortgage agencies refused loans to blacks, making the G.I. Bill even less effective for blacks. Congress did not include merchant marine veterans in the original G.I. Bill, even though they are considered military personnel in times of war in accordance with the Merchant Marine Act of 1936. As President Roosevelt signed the G.I. Bill in June 1944 he said: "I trust Congress will soon provide similar opportunities to members of the merchant marine who have risked their lives time and time again during war for the welfare of their country." Now that the youngest veterans are in their 80s, there are efforts to recognize their contributions by giving some benefits to the remaining survivors. In 2007, three different bills related to this issue were introduced in Congress, one of which passed the House of Representatives only. All veteran education programs are found in law in Title 38 of the United States Code. Each specific program is found in its own Chapter in Title 38. Unlike scholarship programs, the MGIB requires a financial commitment from the service member. However, if the benefit is not used, the service member cannot recoup whatever money was paid into the system. In some states, the National Guard does offer true scholarship benefits, regardless of past or current MGIB participation. In 1984, former Mississippi Congressman Gillespie V. “Sonny” Montgomery revamped the G.I. Bill. From 1984 until 2008, this version of the law was called "The Montgomery G.I. Bill". The Montgomery GI Bill — Active Duty (MGIB) states that active duty members forfeit $100 per month for 12 months; if they use the benefits, they receive, as of 2012, $1,564 monthly as a full-time student (tiered at lower rates for less-than-full-time) for a maximum of 36 months of education benefits. This benefit may be used for degree and certificate programs, flight training, apprenticeship/on-the-job training and correspondence courses if the veteran is enrolled full-time. Part-time veteran students receive less, but for a proportionately longer period. This means that for every month the veteran receives benefits at the half-time rate, only half a month of entitlement is charged. Veterans from the reserve have different eligibility requirements and different rules on receiving benefits (see Ch. 1606, Ch. 1607 and Ch. 
33). MGIB may also be used while on active duty, in which case it only reimburses the cost of tuition and fees. Each service has additional educational benefit programs for active duty members. Most delay using MGIB benefits until after separation, discharge or retirement. The "Buy-Up" option, also known as the "kicker", allows active duty members to forfeit up to $600 more toward their MGIB. For every dollar the service member contributes, the federal government contributes $8. Those who forfeit the maximum ($600) will receive, upon approval, an additional $150 per month for 36 months, or a total of $5400. This allows the veteran to receive $4,800 in additional funds ($5400 total minus the $600 contribution to receive it), but not until after leaving active duty (a short worked example of this arithmetic appears below). The additional contribution must be made while still on active duty. It is available for G.I. Bill recipients using either Ch. 30 or Ch. 1607, but cannot be extended beyond 36 months if a combination of G.I. Bill programs is used. MGIB benefits may be used up to 10 years from the date of last discharge or release from active duty. The 10-year period can be extended by the amount of time a service member was prevented from training during that period because of a disability or because he/she was held by a foreign government or power. The 10-year period can also be extended if one reenters active duty for 90 days or more after becoming eligible. The extension ends 10 years from the date of separation from the later period. Periods of active duty of less than 90 days qualify for extensions only if one was separated for one of the following:
- A service-connected disability
- A medical condition existing before active duty
Those eligible based on two years of active duty and four years in the Selected Reserve (also known as "call to service") have 10 years from their release from active duty, or 10 years from the completion of the four-year Selected Reserve obligation, to use MGIB benefits. At this time, service members cannot recoup any monies paid into the MGIB program should it not be utilized. Service members may use the G.I. Bill in conjunction with Military Tuition Assistance (MilTA) to help with payments above the MilTA cap. This will reduce the total benefit available once the member leaves service. MGIB benefits may be used for:
- College, business
- Technical or vocational courses
- Correspondence courses
- Apprenticeship/job training
- Flight training (usually limited to 60% for Ch. 30, see Ch. 33 for more flight information)
Under this bill, benefits may be used to pursue an undergraduate or graduate degree at a college or university, a cooperative training program, or an accredited independent study program leading to a degree. "Chapter 31" is a vocational rehabilitation program that serves eligible active duty servicemembers and veterans with service-connected disabilities. This program promotes the development of suitable, gainful employment by providing vocational and personal adjustment counseling, training assistance, a monthly subsistence allowance during active training, and employment assistance after training. Independent living services may also be provided to advance vocational potential for eventual job seekers, or to enhance the independence of eligible participants who are presently unable to work. 
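Looking back at the "Buy-Up" option described above, the arithmetic can be sketched in a few lines. This is illustrative only, using just the figures quoted in the text (a $600 maximum additional contribution, an extra $150 per month, 36 months of benefits); the function name and the linear pro-rating of partial contributions are assumptions for illustration, not an official VA formula.

```python
def buy_up_value(contribution: float) -> dict:
    """Illustrative Buy-Up ("kicker") arithmetic.

    Assumes (for illustration) that the extra monthly benefit scales linearly
    with the contribution, up to the $600 maximum and $150/month quoted above.
    """
    contribution = min(contribution, 600.0)
    extra_monthly = 150.0 * (contribution / 600.0)  # $150/month at the $600 maximum
    total_extra = extra_monthly * 36                # paid over 36 months of benefits
    return {
        "extra_per_month": extra_monthly,
        "total_extra": total_extra,                 # $5,400 at the maximum
        "net_gain": total_extra - contribution,     # $4,800 at the maximum
    }

print(buy_up_value(600))
# {'extra_per_month': 150.0, 'total_extra': 5400.0, 'net_gain': 4800.0}
```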
In order to receive an evaluation for Chapter 31 vocational rehabilitation and/or independent living services, those qualifying as a "servicemember" must have a memorandum service-connected disability rating of 20% or greater and apply for vocational rehabilitation services. Those qualifying as "veterans" must have received, or eventually receive, an honorable or other-than-dishonorable discharge, have a VA service-connected disability rating of 10% or more, and apply for services. The law provides for a 12-year basic period of eligibility in which services may be used, which begins on the later of the date of separation from active military duty or the date the veteran was first notified of a service-connected disability rating. In general, participants have 48 months of program entitlement to complete an individual vocational rehabilitation plan. Participants deemed to have a "serious employment handicap" will generally be granted exemption from the 12-year eligibility period and may receive additional months of entitlement as necessary to complete approved plans. The Veterans Educational Assistance Program (VEAP) is available for those who first entered active duty between January 1, 1977, and June 30, 1985, and elected to make contributions from their military pay to participate in this education benefit program. Participants' contributions are matched on a $2 for $1 basis by the Government. This benefit may be used for degree and certificate programs, flight training, apprenticeship/on-the-job training and correspondence courses.

Chapter 33 (Post-9/11 G.I. Bill)

Congress, in the summer of 2008, approved an expansion of benefits beyond the then-current G.I. Bill program for military veterans serving since September 11, 2001, originally proposed by Senator Jim Webb. Beginning in August 2009, recipients became eligible for greatly expanded benefits, including coverage of the full cost of tuition at any public college in their state. The new bill also provides a housing allowance and a $1,000-a-year stipend for books, among other benefits. The VA announced in September 2008 that it would manage the new benefit itself instead of hiring an outside contractor after protests by veterans' organizations and the American Federation of Government Employees. Veterans Affairs Secretary James B. Peake stated that although it was "unfortunate that we will not have the technical expertise from the private sector," the VA "can and will deliver the benefits program on time."

Pending changes to the post-9/11 G.I. Bill

In December 2010 Congress passed the Post-9/11 Veterans Education Assistance Improvements Act of 2010. The new law, often referred to as G.I. Bill 2.0, expands eligibility for members of the National Guard to include time served on Title 32 or in the full-time Active Guard and Reserve (AGR). It does not, however, cover members of the Coast Guard Reserve who have served under Title 14 orders performing duties comparable to those performed by National Guard personnel under Title 32 orders. The new law also ends the payment of the housing allowance during breaks between enrollment periods ("break pay"), so that the allowance is paid only for periods of actual enrollment. For example, if the veteran is full-time and his or her maximum BAH rate is $1500 per month, then he or she will receive (13/30) × $1500 = $650 for the end of the first period of enrollment, then (10/30) × $1500 = $500 for the beginning of the second period of enrollment. Effectively, the change in break-pay means the veteran will receive $1150 per month for August instead of $1500 per month. 
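A minimal sketch of this proration, using only the numbers from the example above (a $1500 monthly BAH rate, 13 enrolled days at the end of one term, 10 at the start of the next, and a 30-day month for prorating). The function is illustrative, not the VA's actual payment rule.

```python
def prorated_bah(monthly_rate: float, enrolled_days: int, days_in_month: int = 30) -> float:
    """Prorate the housing allowance for partial-month enrollment (illustrative only)."""
    return monthly_rate * enrolled_days / days_in_month

rate = 1500.0
end_of_first_term = prorated_bah(rate, 13)     # $650 for the last 13 enrolled days
start_of_second_term = prorated_bah(rate, 10)  # $500 for the first 10 enrolled days
print(end_of_first_term + start_of_second_term)  # 1150.0 for the month, instead of 1500.0
```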
This has a significant impact on December–January BAH payments, since most colleges have 2–4 week breaks. Another change enables active-duty servicemembers and their G.I. Bill-eligible spouses to receive the annual $1,000 book stipend (pro-rated for their rate of pursuit), adds several vocational, certification and OJT options, and removes the state-by-state tuition caps for veterans enrolled at publicly funded colleges and universities. Changes to Ch. 33 also include a new $17,500 annual cap on tuition and fees coverage for veterans attending private colleges and foreign colleges and universities. The Survivors' and Dependents' Educational Assistance Program (DEA) provides education and training opportunities to eligible dependents of veterans who are permanently and totally disabled due to a service-related condition, or who died while on active duty or as a result of a service-related condition. The program offers up to 45 months of education benefits. These benefits may be used for degree and certificate programs, apprenticeship, and on-the-job training. Spouses may take correspondence courses. The Montgomery G.I. Bill — Selected Reserve (MGIB-SR) program may be available to members of the Selected Reserve, including all military branch reserve components as well as the Army National Guard and Air National Guard. This benefit may be used for degree and certificate programs, flight training, apprenticeship/on-the-job training and correspondence courses. The Reserve Educational Assistance Program (REAP) is available to all reservists who, after September 11, 2001, complete 90 days or more of active duty service "in support of contingency operations." This benefit provides reservists returning from active duty with up to 80% of the active duty (Chapter 30) G.I. Bill benefits as long as they remain active participants in the reserves.

MGIB comparison chart

| Program | Time Limit (Eligibility) | Months of Benefits (Full Time) |
|---|---|---|
| Active Duty MGIB (Chapter 30) | 10 yrs from last discharge from active duty. | 36 months |
| Active Duty Chap 30 Top-up | While on active duty only. | 36 months |
| Post-9/11 G.I. Bill (Chapter 33) | 15 yrs from last discharge from active duty. | 36 months |
| Voc Rehab (Chapter 31) | 12 yrs from discharge or notification of service-connected disability, whichever is later. In cases of "extreme disability", the 12-year timeline can be waived. | 48 months |
| VEAP (Chapter 32) | Entered service for the first time between January 1, 1977, and June 30, 1985; opened a contribution account before April 1, 1987; voluntarily contributed from $25 to $2,700. | 1 to 36 months depending on the number of monthly contributions |
| DEA (Chapter 35) | – | up to 45 months |
| Selected Reserve (Chapter 1606) | While in the Selected Reserve. | 36 months |
| Selected Reserve REAP (Chapter 1607) | While in the Selected Reserve. If separated from the Ready Reserve for a disability which was not the result of willful misconduct, for 10 yrs after date of entitlement. | 36 months |
| Additional Benefits: Tuition Assistance | Until the day you leave the Selected Reserve; this includes voluntary entry into the IRR. | Contingent as long as you serve as a drilling Reservist. |
| Additional Benefits: Student Loan Repayment Program | Until the day you leave the Selected Reserve; this includes voluntary entry into the IRR. | Contingent as long as you serve as a drilling Reservist. |

- African Americans and the G.I. Bill
- GI Bill Tuition Fairness Act of 2013 (H.R. 
357; 113th Congress) - proposed amendments related to in-state versus out-of-state tuition - Post-9/11 Veterans Educational Assistance Act of 2008 - Glenn C. Altschuler and Stuart M. Blumin, The GI Bill: a new deal for veterans (2009) p 118 - Olson, 1973 and see also Bound and Turner 2002 - Stanley, 2003 - Frydl, 2009 - Suzanne Mettler, Soldiers to citizens: The GI Bill and the making of the greatest generation (2005) - Lemieux, Thomas; Card, David (2001). "Education, earnings, and the ‘Canadian GI Bill’". Canadian Journal of Economics/Revue canadienne d'économique 34 (2): 313–344. doi:10.1111/0008-4085.00077. - "The George Washington Uni Profile". DCMilitaryEd.com. Retrieved 2014-01-09. - David Ortiz, Beyond the Bonus March and GI Bill: how veteran politics shaped the New Deal era (2013) p xiii - Ortiz, Beyond the Bonus March and GI Bill: how veteran politics shaped the New Deal era (2009) p xiii - The GI BILL's History: Born Of Controversy: The GI Bill Of Rights - James E. McMillan (2006). Ernest W. McFarland: Majority Leader of the United States Senate, Governor and Chief Justice of the State of Arizona : a biography. Sharlot Hall Museum Press. p. 113. ISBN 978-0-927579-23-0. - THE CONGRESSIONAL RESEARCH SERVICE (2004), A CHRONOLOGY OF HOUSING LEGISLATION AND SELECTED EXECUTIVE ACTIONS, 1892-2003, U.S. Government Printing Office - Jackson, Kenneth T. (1985). Crabgrass Frontier: The Suburbanization of the United States. New York: Oxford University Press. p. 206. - See The Historical Development of Veterans' Benefits in the United States: A Report on Veterans' Benefits in the United States by the President's Commission on Veterans' Pensions, 84th Congress, 2d Session, House Committee Print 244, Staff Report No. 1, May 9, 1956, pp. 160-161. Also see "The New GI Bill: Who Gets What," Changing Times (May 1953), 22 and Congress and the Nation, 1945-1964: A Review of Government and Politics in the Postwar Years, Washington, D.C.: Congressional Quarterly Service, 1965, 1348. - Lyndon B. Johnson, "Remarks Upon Signing the 'Cold War GI Bill'" (1966) at The American Presidency Project - Kotz, Nick (28 August 2005). "Review: 'When Affirmative Action Was White': Uncivil Rights". New York Times. Retrieved 2 August 2015. - Katznelson, Ira (2006). When affirmative action was white : an untold history of racial inequality in twentieth-century America ([Norton pbk ed.] ed.). New York: W.W. Norton. ISBN 978-0393328516. - Herbold, Hilary (Winter 1994). "Never a Level Playing Field: Blacks and the GI Bill". The Journal of Blacks in Higher Education (6): 107. doi:10.2307/2962479. - Herbold, Hilary (Winter 1994). "Never a Level Playing Field: Blacks and the GI Bill". The Journal of Blacks in Higher Education (6): 104–108. doi:10.2307/2962479. - Howard Johnson, "The Negro Veteran Fights for Freedom!" Political Affairs, May 1947, p. 430. - Belated Thank You to the Merchant Mariners of World War II Act of 2007 - GI-BILL History - Buy-Up Program - Davenport, Christian, "Expanded GI Bill Too Late For Some", Washington Post, October 21, 2008, p. 1. - More Details on GI Bill 2.0 - Montgomery G.I. Bill Guidelines for Active Duty (MGIB) - Montgomery G.I. Bill - Active Duty - (U.S. Department of Veterans Affairs) - Top-up Tuition Assistance - Military Veteran Education Benefits - G.I. Bill Veteran Resources - Tuition Assistance Top-up - (U.S. Department of Veterans Affairs) - VEAP - Military Veteran Education Benefits - G.I. Bill Veteran Resources - Veterans Educational Assistance Program (VEAP) - (U.S. 
Department of Veterans Affairs) - Survivors' and Dependents' Educational Assistance Program - (U.S. Department of Veterans Affairs) - Montgomery G.I. Bill Guidelines for Selected Reserve (MGIB-SR) - MGIB-SR General Information - (U.S. Department of Veterans Affairs) - Payment Rates - Payment Rates - Payment Rates - Payment Rates - Bennett, Michael J. When Dreams Came True: The G.I. Bill and the Making of Modern America (New York: Brassey’s Inc., 1996) - Bound, John, and Sarah Turner. "Going to War and Going to College: Did World War II and the G.I. Bill Increase Educational Attainment for Returning Veterans?" Journal of Labor Economics Vol. 20, No. 4 (October 2002), pp. 784–815 in JSTOR - Boulton, Mark. Failing our Veterans: The G.I. Bill and the Vietnam Generation (NYU Press, 2014) - Keane, Jennifer. Doughboys, the Great War and the Remaking of America (Johns Hopkins University Press, 2001) - Frydl, Kathleen. The G.I. Bill (Cambridge University Press, 2009) - Humes, Edward (2006). Over Here: How the G.I. Bill Transformed the American Dream. Harcourt. ISBN 0-15-100710-1. - Mettler, Suzanne. Soldiers to Citizens: The G.I. Bill and the Making of the Greatest Generation (Oxford University Press, 2005). online; excerpt - Olson, Keith. "The G. I. Bill and Higher Education: Success and Surprise," American Quarterly Vol. 25, No. 5 (December 1973) 596-610. in JSTORin JSTOR - Olson, Keith, The G.I. Bill, The Veterans, and The Colleges (Lexington: University Press of Kentucky, 1974) - Ross, David B. Preparing for Ulysses: Politics and Veterans During World War II (Columbia University Press, 1969). - Stanley, Marcus (2003). "College Education and the Midcentury GI Bills". The Quarterly Journal of Economics 118 (2): 671–708. doi:10.1162/003355303321675482. - Van Ells, Mark D. To Hear Only Thunder Again: America's World War II Veterans Come Home. Lanham, MD: Lexington Books, 2001. - GI Bill Forum[dead link] - The American Legion's MyGIBill.org - The Department of Veteran Affairs' GI Bill website - Central Committee for Conscientious Objectors analysis of the MGIB - Education Fact Sheet for Guard & Reserve Members - Education Benefits Available by States - Web-Enable Education Benefits System - GI Bill top up program
https://en.wikipedia.org/wiki/G._I._Bill_of_Rights
4.1875
A primary election is an election that narrows the field of candidates before an election for office. Primary elections are one means by which a political party or a political alliance nominates candidates for an upcoming general election or by-election. Other methods of selecting candidates include caucuses, conventions, and nomination meetings. Historically, Canadian political parties chose their candidates through nominating conventions held by constituency riding associations. Canadian party leaders are elected at leadership conventions, although some parties have abandoned this practice in favor of one member, one vote systems.
- 1 Types
- 2 Primaries in the United States
- 3 Primaries in Europe
- 4 Primaries in Canada
- 5 Primaries worldwide
- 6 See also
- 7 Notes
- 8 References
- 9 External links
Where primary elections are organized by parties, not the administration, two types of primaries can generally be distinguished:
- Closed primary. (synonyms: internal primaries, party primaries) In the case of closed primaries, internal primaries, or party primaries, only party members can vote.
- Open primary. All voters can take part in an open primary and may cast votes on a ballot of any party. The party may require them to express their support for the party's values and pay a small contribution to the costs of the primary.
In the United States, other types can be differentiated:
- Closed primary. People may vote in a party's primary only if they are registered members of that party prior to election day. Independents cannot participate. Note that because some political parties name themselves independent, the terms "non-partisan" or "unaffiliated" often replace "independent" when referring to those who are not affiliated with a political party. Fourteen states — Connecticut, Delaware, Florida, Kentucky, Maine, Nebraska, Nevada, New Jersey, New Mexico, New York, Oklahoma, Oregon, Pennsylvania, and South Dakota — have closed primaries.
- Semi-closed. As in closed primaries, registered party members can vote only in their own party's primary. Semi-closed systems, however, allow unaffiliated voters to participate as well. Depending on the state, independents either make their choice of party primary privately, inside the voting booth, or publicly, by registering with any party on Election Day. Twelve states — Alaska, Arizona, Colorado, Iowa, Kansas, Massachusetts, New Hampshire, North Carolina, Rhode Island, Utah, West Virginia, and Wyoming — have semi-closed primaries that allow voters to register or change party preference on election day.
- Open primary. A registered voter may vote in any party primary regardless of his or her own party affiliation. When voters do not register with a party before the primary, it is called a pick-a-party primary because the voter can select which party's primary he or she wishes to vote in on election day. Because of the open nature of this system, a practice known as raiding may occur. Raiding consists of voters of one party crossing over and voting in the primary of another party, effectively allowing a party to help choose its opposition's candidate. The theory is that opposing party members vote for the weakest candidate of the opposite party in order to give their own party the advantage in the general election. An example of this can be seen in the 1998 Vermont senatorial primary with the nomination of Fred Tuttle as the Republican candidate in the general election.
- Semi-open. 
A registered voter need not publicly declare which political party's primary that they will vote in before entering the voting booth. When voters identify themselves to the election officials, they must request a party's specific ballot. Only one ballot is cast by each voter. In many states with semi-open primaries, election officials or poll workers from their respective parties record each voter's choice of party and provide access to this information. The primary difference between a semi-open and open primary system is the use of a party-specific ballot. In a semi-open primary, a public declaration in front of the election judges is made and a party-specific ballot given to the voter to cast. Certain states that use the open-primary format may print a single ballot and the voter must choose on the ballot itself which political party's candidates they will select for a contested office. - Blanket primary. A primary in which the ballot is not restricted to candidates from one party. - Nonpartisan blanket primary. A primary in which the ballot is not restricted to candidates from one party, where the top two candidates advance to the general election regardless of party affiliation. Louisiana has famously operated under this system, which has been nicknamed the "jungle primary." California has used a nonpartisan blanket primary since 2012 after passing Proposition 14 in 2010, and the state of Washington has used a nonpartisan blanket primary since 2008. Primaries in the United States The United States is one of few countries to select candidates through popular vote in a primary election system; most countries rely on party leaders to vet candidates, as was previously the case in the U.S. In modern politics, primary elections have been described as a significant vehicle for taking decision-making from political insiders to the voters, though this is disputed by select political science research. The selection of candidates for federal, state, and local general elections takes place in primary elections organized by the public administration for the general voting public to participate in for the purpose of nominating the respective parties' official candidates; state voters start the electoral process for governors and legislators through the primary process, as well as for many local officials from city councilors to county commissioners. The candidate who moves from the primary to be successful in the general election takes public office. Primaries can be used in nonpartisan elections to reduce the set of candidates that go on to the general election (qualifying primary). (In the U.S., many city, county and school board elections are non-partisan.) Generally, if a candidate receives more than 50% of the vote in the primary, he or she is automatically elected, without having to run again in the general election. If no candidate receives a majority, twice as many candidates pass the primary as can win in the general election, so a single seat election primary would allow the top two primary candidates to participate in the general election following. When a qualifying primary is applied to a partisan election, it becomes what is generally known as a blanket or Louisiana primary: typically, if no candidate wins a majority in the primary, the two candidates receiving the highest pluralities, regardless of party affiliation, go on to a general election that is in effect a run-off. 
This often has the effect of eliminating minor parties from the general election, and frequently the general election becomes a single-party election. Unlike a plurality voting system, a run-off system meets the Condorcet loser criterion in that the candidate who ultimately wins would not have been beaten in a two-way race with every one of the other candidates. Because many Washington residents were disappointed over the loss of their blanket primary, which the Washington State Grange helped institute in 1935, the Grange filed Initiative 872 in 2004 to establish a blanket primary for partisan races, thereby allowing voters to once again cross party lines in the primary election. The two candidates with the most votes then advance to the general election, regardless of their party affiliation. Supporters claimed it would bring back voter choice; opponents said it would exclude third parties and independents from general election ballots, could result in Democratic or Republican-only races in certain districts, and would in fact reduce voter choice. The initiative was put to a public vote in November 2004 and passed. On July 15, 2005, the initiative was found unconstitutional by the U.S. District Court for the Western District of Washington. The U.S. Supreme Court heard the Grange's appeal of the case in October 2007. In March 2008, the Supreme Court upheld the constitutionality of the Grange-sponsored Top 2 primary, citing a lack of compelling evidence to overturn the voter-approved initiative. In elections using voting systems where strategic nomination is a concern, primaries can be very important in preventing "clone" candidates that split their constituency's vote because of their similarities. Primaries allow political parties to select and unite behind one candidate. However, tactical voting is sometimes a concern in non-partisan primaries, as members of the opposite party can strategically vote for the weaker candidate in order to face an easier general election. In California, under Proposition 14 (Top Two Candidates Open Primary Act), a voter-approved referendum, in all races except those for U.S. President and county central committee offices, all candidates running in a primary election regardless of party will appear on a single primary election ballot and voters may vote for any candidate, with the top two vote-getters overall moving on to the general election regardless of party. The effect of this is that it will be possible for two Republicans or two Democrats to compete against each other in a general election if those candidates receive the most primary-election support. As a result of a federal court decision in Idaho Republican Party v. Ysursa, the 2011 Idaho Legislature passed House Bill 351 implementing a closed primary system. In the United States, Iowa and New Hampshire have drawn attention every four years because they hold the first caucus and primary election, respectively, and often give a candidate the momentum to win their party's nomination. A criticism of the current presidential primary election schedule is that it gives undue weight to the few states with early primaries, as those states often build momentum for leading candidates and rule out trailing candidates long before the rest of the country has even had a chance to weigh in, leaving the last states with virtually no actual input on the process. 
The counterargument to this criticism, however, is that, by subjecting candidates to the scrutiny of a few early states, the parties can weed out candidates who are unfit for office. The Democratic National Committee (DNC) proposed a new schedule and a new rule set for the 2008 Presidential primary elections. Among the changes: the primary election cycle would start nearly a year earlier than in previous cycles, states from the West and the South would be included in the earlier part of the schedule, and candidates who run in primary elections not held in accordance with the DNC's proposed schedule (as the DNC does not have any direct control over each state's official election schedules) would be penalized by being stripped of delegates won in offending states. The New York Times called the move, "the biggest shift in the way Democrats have nominated their presidential candidates in 30 years." Of note regarding the DNC's proposed 2008 Presidential primary election schedule is that it contrasted with the Republican National Committee's (RNC) rules regarding Presidential primary elections. "No presidential primary, caucus, convention, or other meeting may be held for the purpose of voting for a presidential candidate and/or selecting delegates or alternate delegates to the national convention, prior to the first Tuesday of February in the year in which the national convention is held." In 2020, this date is February 4. Candidates for U.S. President who seek their party's nomination participate in primary elections run by state governments, or caucuses run by the political parties. Unlike an election where the only participation is casting a ballot, a caucus is a gathering or "meeting of party members designed to select candidates and propose policies." Both primaries and caucuses are used in the Presidential nomination process, beginning in January or February and culminating in the late-summer political party conventions. Candidates may earn convention delegates from each state primary or caucus. Sitting presidents generally do not face serious competition from their party. While it is clear that the closed/semi-closed/semi-open/open classification commonly used by scholars studying primary systems does not fully explain the highly nuanced differences seen from state to state, still, it is very useful and has real-world implications for the electorate, election officials, and the candidates themselves. As far as the electorate is concerned, the extent of participation allowed to weak partisans and independents depends almost solely on which of the aforementioned categories best describes their state's primary system. Clearly, open and semi-open systems favor this type of voter, since they can choose which primary they vote in on a yearly basis under these models. In closed primary systems, true independents are, for all practical purposes, shut out of the process. This classification further affects the relationship between primary elections and election commissioners and officials. The more open the system, the greater the chance of raiding, or voters voting in the other party's primary in hopes of getting a weaker opponent chosen to run against a strong candidate in the general election. Raiding has proven stressful to the relationships between political parties, who feel cheated by the system, and election officials, who try to make the system run as smoothly as possible. 
Perhaps the most dramatic effect this classification system has on the primary process is its influence on the candidates themselves. Whether a system is open or closed dictates the way candidates run their campaigns. In a closed system, from the time a candidate qualifies to the day of the primary, he must cater to strong partisans, who tend to lean to the extreme ends of the ideological spectrum. In the general election, on the other hand, the candidate must move more towards the center in hopes of capturing a plurality. Daniel Hannan, a British politician and Member of the European Parliament, claimed, "Open primaries are the best idea in contemporary politics. They shift power from party hierarchs to voters, from Whips to backbenchers and from ministers to Parliament. They serve to make legislatures more diverse and legislators more independent." Primaries in Europe In Europe, primaries are not organized by the public administration but by parties themselves. Legislation is mostly silent on primaries. The main reason for this is that the voting method used to form governments, be it proportional representation or two-round systems, lessens the need for an open primary. Governments are not involved in the process; however, parties may need their cooperation, notably in the case of an open primary, e.g. to obtain the electoral roll, or to cover the territory with a sufficient number of polling stations. Whereas closed primaries are rather common within many European countries, few political parties in Europe have so far opted for open primaries. Parties generally organise primaries to nominate the party leader (leadership election). The underlying reason for that is that most European countries are parliamentary democracies. National governments are derived from the majority in the Parliament, which means that the head of the government is generally the leader of the winning party. France is one exception to this rule. Closed primaries happen in many European countries, while open primaries have so far only occurred in the socialist and social-democratic parties in Greece and Italy, whereas France's Socialist Party organised the first open primary in France in October 2011. One of the more recent developments is organizing primaries on the European level. The European parties that have organized primaries so far are the European Green Party (EGP) and the Party of European Socialists (PES). In Italy, the first open primaries took place on 16 October 2005. They led to the designation of Romano Prodi as leader of the Olive Tree coalition, which gathered several center and left-wing parties, for the legislative elections of 9 and 10 April 2006. Several parties of the coalition decided to form a single major centre-left party: The Democratic Party, which uses primary elections to choose its candidate for the premiership. In France, elections follow a two-round system. In the first round, all candidates who have qualified (for example, by obtaining a minimal number of signatures of support from elected officials) are on the ballot. In practice, each candidate usually represents a political party, large or small. In the second round, held two weeks later, the top two candidates run against each other, with the candidates from losing parties usually endorsing one of the two finalists. The means by which the candidate of an established political party is selected has evolved. 
Until 2012, none of the six Presidents elected through direct election faced a competitive internal election. - In 2007, Sarkozy, President of the UMP, organized an approval "primary" without any opponent. He won with 98% of the vote and made his candidacy speech thereafter. - On the left, however, the Socialist Party, which helped Mitterrand gain the Presidency for 14 years, has been plagued by internal divisions since the latter departed from politics. Rather than forming a new party, which is the habit on the right, the party started to elect its nominee internally. - A first try in 1995: Lionel Jospin won the nomination three months before the election. He lost in the run-off to Chirac. Later in 2002, although the candidacy of then-PM Jospin was undisputed in his party, each of the 5 left-wing parties of the government he led sent a candidate... paving the way for a loss of all five. - The idea made progress coming near the 2007 race, once the referendum on a European constitution was over. The latter showed strong ideological divisions within the left-wing spectrum, and within the Socialist Party itself. This prevented the possibility of a primary spanning the whole left wing that would give its support to a presidential candidate. Given that no majority supported either a leader or a split, a registration campaign (enabling membership for only 20 euros) and a closed primary were organized, which Ségolène Royal won. She qualified for the national run-off, which she lost to Sarkozy. - In 2011, the Socialist Party decided to organise the first ever open primary in France to pick the Socialist Party and Radical Party of the Left nominee for the 2012 presidential election. Inspired by the 2008 U.S. primaries, it was seen as a way to reinvigorate the party. The idea was first proposed by Terra Nova, an independent left-leaning think tank, in a 2008 report. It was also criticized for going against the nature of the regime. The open primary was not state-organized: the party took charge of all the electoral procedures, planning to set up 10,000 polling stations. All citizens on the electoral rolls, party members of the Socialist Party and the Radical Party of the Left, and members of the parties' youth organisations (MJS and JRG), including minors aged 15 to 18, were entitled to vote in exchange for a euro to cover the costs. More than 3 million people participated in this first open primary, which was considered a success, and former party leader François Hollande was designated the Socialist and Radical candidate for the 2012 presidential election. - Other parties organize membership primaries to choose their nominee, such as Europe Ecologie – Les Verts (EE-LV) (2006, 2011), and the French Communist Party in 2011. - At the local level, membership primaries are the rule for the Socialist Party's candidates, but these are usually not competitive. In order to tame a potential feud in his party, and prepare the ground for a long campaign, Sarkozy pushed for a closed primary in 2006 to designate the UMP candidate for the 2008 election of the Mayor of Paris. Françoise de Panafieu was elected in a four-way race. However, she did not clinch the mayorship two years later. For the 2010 general election, the Conservative Party used open primaries to select two candidates. Further open primaries were used to select some Conservative candidates for the 2015 general election, and there are hopes other parties may nominate future candidates in this way. 
- Only three parties organised an open primary: France (PS), Greece (ΠΑΣΟΚ), Italy (PD) - Closed primaries were held in nine parties: Belgium (sp.a, PS), Cyprus (ΕΔΕΚ), Denmark (SD), France (PS) until 2011, Ireland (LP), Netherlands (PvdA), Portugal (PS), United Kingdom (Labour) The case of the UK Labour Party's leadership election is specific, as three electoral colleges, each accounting for one third of the votes, participate in this primary election: Labour members of Parliament and of the European Parliament, party members and members of affiliated organisations such as trade unions. - The designation of the party leader was made by the party's congress in the eighteen remaining parties: Austria (SPÖ), Bulgaria (БСП), Czech Republic (ČSSD), Estonia (SDE), Finland (SDP), Germany (SPD), Hungary (MSZP), Latvia (LSDSP), Lithuania (SDPL), Luxembourg (LSAP), Malta (LP), Poland (SLD, UP), Romania (PSD), Slovakia (SMER-SD), Slovenia (SD), Spain (PSOE), Sweden (SAP), United Kingdom / Northern Ireland (SDLP) Indeed, the Lisbon treaty, which entered into force in December 2009, lays down that the outcome of elections to the European Parliament must be taken into account in selecting the President of the Commission; the Commission is in some respects the executive branch of the EU and so its president can be regarded as the EU prime minister. Parties are therefore encouraged to designate their candidates for Commission president ahead of the next election in 2014, in order to allow voters to vote with a full knowledge of the facts. Many movements are now asking for primaries to designate these candidates. - As early as April 2004, a former British Conservative MEP, Tom Spencer, advocated American-style primaries in the European People's Party: "A series of primary elections would be held at two-week intervals in February and March 2009. The primaries would start in the five smallest countries and continue every two weeks until the big five voted in late March. To avoid swamping by the parties from the big countries, one could divide the number of votes cast for each candidate in each country by that country's voting weight in the Council of Ministers. Candidates for the post of president would have to declare by 1 January 2009." - In July 2013 the European Green Party (EGP) announced that it would run a first-ever Europe-wide open primary in preparation for the European elections in 2014. It was to be open to all citizens of the EU over the age of 16 who "supported green values". They elected two transnational candidates who were to be the face of the common campaign of the European green parties united in the EGP, and who also were their candidates for European Commission president. - Following the defeat of the Party of European Socialists during the European elections of June 2009, the PES Congress that took place in Prague in December 2009 made the decision that PES would designate its own candidate before the 2014 European elections. A Campaign for a PES primary was then launched by PES supporters in June 2010, and it managed to convince the PES Council meeting in Warsaw in December 2010 to set up a Working Group "Candidate 2014" in charge of proposing a procedure and timetable for a "democratic" and "transparent" designation process "bringing on board all our parties and all levels within the parties". The European think-tank Notre Europe has also raised the idea that European political parties should designate their candidate for Vice-president / High representative of the Union for foreign affairs. 
This would lead European parties to have "presidential tickets" on the American model. Finally, the European Parliament envisaged to introduce a requirement for internal democracy in the regulation on the statute of European political parties. European parties would therefore have to involve individual members in the major decisions such as designating the presidential candidate. Primaries in Canada As in Europe, primary elections in Canada are not organized by the public administration but by parties themselves. Political parties participate in federal elections to the House of Commons, in legislative elections in all ten provinces, and in Yukon. (The legislatures and elections in the Northwest Territories and Nunavut are non-partisan.) Typically, in the months before an anticipated general election, local riding associations of political parties in each electoral district will schedule and announce a Nomination Meeting (similar to a nominating caucus in the United States). Would-be candidates will then file nomination papers with the association, and usually will devote time to solicit existing party members, and to sign up new party members who will also support them at the nomination meeting. At the meeting, typically each candidate will speak, and then members in attendance will vote. The voting system most often used is an exhaustive ballot system; if no candidate has over 50% of the votes, the candidate with the lowest number of votes will be dropped and another ballot will be held. Also, other candidates who recognize that they will probably not win may withdraw between ballots, and may "throw their support" to (encourage their own supporters to vote for) another candidate. After the nomination meeting, the candidate and the association will obtain approval from party headquarters, and file the candidate's official nomination papers and necessary fees and deposits with Elections Canada or the provincial/territorial election commissions as appropriate. At times, party headquarters may overturn an association's chosen candidate; for example, if any scandalous information about the candidate comes to light after the nomination. A party headquarters may also "parachute" a prominent candidate into an easy-to-win riding, removing the need to have a nomination meeting. These situations only come up infrequently, as they tend to cause disillusionment among a party's supporters. Canadian political parties also organize their own elections of party leaders. Not only will the party leader run for a seat in their own chosen riding, they will also become Prime Minister (in a federal election) or Premier (in a province or territory) should their party win the most seats. If the party wins the second-most seats, the party leader will become Leader of the Official Opposition; if the party comes third or lower, the leader will still be recognized as the leader of their party, and will be responsible for co-ordinating the activities and affairs of their party's caucus in the legislature. In the past, Canadian political parties chose party leaders through the votes of delegates to a Leadership Convention. Local riding associations would choose delegates, usually in a manner similar to how they would choose a candidate for election. These delegates typically said explicitly which leadership candidate they would support. Those delegates, as well as other delegates (e.g. 
sitting party members of Parliament or the legislature, or delegates from party-affiliated organizations such as labor unions in the case of the New Democratic Party), would then vote, again using the exhaustive ballot method, until a leader was chosen. Lately, Canada's major political parties have moved to "one member, one vote" systems for their federal leadership elections. A leadership convention is still scheduled, but all party members have a chance to vote for the new leader. Typically, members may vote either in person as a delegate to the convention, online as they watch ballot-by-ballot results on the Internet or on television, or through a mail-in preferential ballot (handled by an "instant runoff" method). This method was used in the 2012 NDP leadership convention which chose Tom Mulcair as federal party leader. When the Liberal Party chose Justin Trudeau as party leader in its leadership convention in 2013, they used a similar process, but only used online preferential voting for members not present at the convention and did not use mail-in ballots. As well, they scaled all members' votes such that each of the 308 riding associations' votes would be equal, regardless of how many or how few members voted in each riding. - United States presidential primary. - Primary elections in Italy. - Argentine general election, 2011, Argentine legislative election, 2013, Argentine general election, 2015 - Uruguay, since 1999. - United New Democratic Party (South Korea, 2007). - United Kingdom - Armenia. In an innovation on 24 and 25 November 2007, one political party conducted a non-binding Armenia-wide primary election. The party, the Armenian Revolutionary Federation, invited the public to vote to advise the party which of two candidates they should formally nominate for President of Armenia in the subsequent official election. What characterized it as a primary instead of a standard opinion poll was that the public knew of the primary in advance, all eligible voters were invited, and the voting was by secret ballot. "Some 68,183 people . . . voted in make-shift tents and mobile ballot boxes . . ." - Colombia. In 2006, the Liberal Party and the socialist Democratic Pole held primary elections, electing Horacio Serpa as the Liberal candidate and Carlos Gaviria as candidate of the Democratic Pole. For the 2010 presidential elections, four parties held primaries: The Liberal Party elected former minister Rafael Pardo as candidate, the Democratic Pole elected senator Gustavo Petro, the Conservative Party chose ambassador Noemi Sanin and the Green Party chose former mayor of Bogota Antanas Mockus. - Costa Rica. The three main political forces (the National Liberation Party, the Social Christian Unity Party and the Citizens' Action Party) have all run primary elections several times. - Republic of China (Taiwan): The Democratic Progressive Party selects all its candidates via opinion polls. The candidate with the highest poll rating will be nominated. The KMT selects candidates using a combination of opinion polls (worth 70%) and primary elections (worth 30%). - Sore-loser law, which states that the loser in a primary election cannot thereafter run as an independent in the general election - Thomas W. Williams (Los Angeles), opposed the direct primary, 1915 - Smith, Kevin B. (2011). Governing States and Localities. Washington, D.C.: CQ Press. pp. 189–190. ISBN 978-1-60426-728-0. - "Closed Primary Election Law & Legal Definition". USLegal.com. Retrieved 2012-11-07. - "Open Primary Law & Legal Definition". 
USLegal.com. Retrieved 2012-11-07. - Bowman, Ann (2012). State and Local Government: The Essentials. Boston, MA: Wadsworth. p. 77. - Dye, Thomas R. (2009). Politics in States and Communities. New Jersey: Pearson Education. p. 152. - (PDF) http://www.sos.wa.gov/_assets/elections/HistoryofWashingtonStatePrimarySystems.pdf. - Ginsberg, Benjamin (2011). We the People: An Introduction to American Politics. New York: W.W. Norton & Co. p. 349. - Cohen, Marty. The Party Decides: Presidential Nominations before and after Reform. Chicago: University of Chicago, 2008. - Bowman, Ann (2006). State and Local Government: The Essentials. Boston, MA: Houghton Mifflin Co. pp. 75–77. - "Blanket Primary Law & Legal Definition". USLegal.com. Retrieved 2012-11-07. - "WASHINGTON STATE GRANGE v. WASHINGTON STATE REPUBLICAN PARTY". 18 March 2008. U.S. Supreme Court. Retrieved 22 April 2012. - California Secretary of State - McKinley, Jesse (June 9, 2010). "Calif. Voting Change Could Signal Big Political Shift". The New York Times. - Idaho Voter's Guide (PDF). http://www.idahovotes.gov/VoterGuide/2012_Voter_Guide_English.pdf?hp. - "E-voting? Not ready yet.". oregonlive.com. Retrieved 2010-08-11. - "Democrats Set Primary Calendar and Penalties", New York Times, August 20, 2006 - "GOP.com". Gop.com. Archived from the original on 30 November 2008. Retrieved 2009-01-30. - Bardes, Barbara (2012). American Government and Politics Today: The Essentials 2011-12 Edition. Boston, MA: Wadsworth. p. 300. - "Do open primaries favour plutocrats and extremists?". London: Blogs.telegraph.co.uk. 2010-08-29. Retrieved 2010-10-31. - "GP wins Tory 'open primary' race". BBC News. August 4, 2009. Retrieved May 22, 2010. - "Tories test the mood in Totnes". BBC News. August 4, 2009. Retrieved May 22, 2010. - (English) Article by Tom Spencer in European Voice American-style primaries would breathe life into European elections 22.04.2004 - (English) Website of the Campaign for a PES primary - (English) Resolution of the PES Council in Warsaw, A democratic and transparent process for designating the PES candidate for the European Commission Presidency, 2 December 2010 - (French) Les Brefs de Notre Europe, Des réformes institutionnelles à la politisation – Ou comment l’Union européenne du Traité de Lisbonne peut intéresser ses citoyens, October 2010 - (English) European Parliament press release, Constitutional Affairs Committee discusses pan-European political parties, 31 January 2011 - Cross, William (2006). "Chapter 7: Candidate Nomination in Canada's Political Parties". In Jon H. Pammett and Christopher Dornan. The Canadian Federal Election of 2006 (PDF). Toronto: Dundurn Press. pp. 171–195. ISBN 978-1-55002-650-4. - Horizon Armenian Weekly, English Supplement, 2007 December 3, page E1, "ARF conducts 'Primaries' ", a Yerkir agency report from the Armenian capital, Yerevan. - Bibby, John, and Holbrook, Thomas. 2004. Politics in the American States: A Comparative Analysis, 8th Edition. Ed. Virginia Gray and Russell L. Hanson. Washington D.C.: CQ Press, pp. 62–100. - Brereton, Charles. First in the Nation: New Hampshire and the Premier Presidential Primary. Portsmouth, NH: Peter E. Randall Publishers, 1987. - The Center for Election Science. Electoral System Summary - Hershey, Marjorie. Political Parties in America, 12th Edition. New York: Pearson Longman, 2007. pp. 157–73. - Kendall, Kathleen E. 
Communication in the Presidential Primaries: Candidates and the Media, 1912–2000 (2000) - Primaries: Open and Closed - Palmer, Niall A. The New Hampshire Primary and the American Electoral Process (1997) - Scala, Dante J. Stormy Weather: The New Hampshire Primary and Presidential Politics (2003) - Ware, Alan. The American Direct Primary: Party Institutionalization and Transformation in the North (2002), the invention of primaries around 1900
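The qualifying ("top-two") primary mechanics described in the United States section above can be made concrete with a small sketch. This is purely illustrative and not taken from the article or any source cited here: the function name, candidates and vote counts are invented, and real systems add rules (ties, write-ins, party labels) that are ignored below.

```python
# Illustrative sketch of a qualifying / "top-two" primary as described above:
# a candidate with an outright majority wins at the primary stage; otherwise
# the two highest vote-getters, regardless of party, advance to the general
# election. Candidate names and counts are invented.

def top_two_primary(votes):
    """votes: dict mapping candidate -> vote count.
    Returns (outright_winner_or_None, candidates_advancing)."""
    total = sum(votes.values())
    ranked = sorted(votes, key=votes.get, reverse=True)
    if votes[ranked[0]] * 2 > total:      # more than 50% of the primary vote
        return ranked[0], []
    return None, ranked[:2]               # run-off between the top two

winner, advancing = top_two_primary({"Alvarez": 4200, "Baker": 3900, "Chen": 1900})
print(winner, advancing)   # None ['Alvarez', 'Baker'] -> both reach the general election
```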
https://en.wikipedia.org/wiki/Primary_elections
4.28125
Could human life end with an asteroid? Asteroid impacts have played an enormous role in creating Earth and in altering the course of the evolution of life. It is most likely that an asteroid impact brought the end of the dinosaurs and many other lifeforms at the end of the Mesozoic. Could one asteroid do it again? Asteroids are very small, rocky bodies that orbit the Sun. "Asteroid" means "star-like," and in a telescope, asteroids look like points of light, just like stars. Asteroids are irregularly shaped because they do not have enough gravity to become round. They are also too small to maintain an atmosphere, and without internal heat they are not geologically active ( Figure below ). Collisions with other bodies may break up the asteroid or create craters on its surface. In 1991, Asteroid 951 Gaspra was the first asteroid photographed at close range. Gaspra is a medium-sized asteroid, measuring about 19 by 12 by 11 km (12 by 7.5 by 7 mi). Asteroid impacts have had dramatic impacts on the shaping of the planets, including Earth. Early impacts caused the planets to grow as they cleared their portions of space. An impact with an asteroid about the size of Mars caused fragments of Earth to fly into space and ultimately create the Moon. Asteroid impacts are linked to mass extinctions throughout Earth's history. The Asteroid Belt Hundreds of thousands of asteroids have been discovered in our solar system. They are still being discovered at a rate of about 5,000 new asteroids per month. The majority of the asteroids are found in between the orbits of Mars and Jupiter, in a region called the asteroid belt , as shown in Figure below . Although there are many thousands of asteroids in the asteroid belt, their total mass adds up to only about 4% of Earth’s Moon. The white dots in the figure are asteroids in the main asteroid belt. Other groups of asteroids closer to Jupiter are called the Hildas (orange), the Trojans (green), and the Greeks (also green). Scientists think that the bodies in the asteroid belt formed during the formation of the solar system. The asteroids might have come together to make a single planet, but they were pulled apart by the intense gravity of Jupiter. More than 4,500 asteroids cross Earth’s orbit; they are near-Earth asteroids . Between 500 and 1,000 of these are over 1 km in diameter. Any object whose orbit crosses Earth’s can collide with Earth, and many asteroids do. On average, each year a rock about 5–10 m in diameter hits Earth ( Figure below ). Since past asteroid impacts have been implicated in mass extinctions, astronomers are always on the lookout for new asteroids, and follow the known near-Earth asteroids closely, so they can predict a possible collision as early as possible. A painting of what an asteroid a few kilometers across might look like as it strikes Earth. Scientists are interested in asteroids because they are representatives of the earliest solar system ( Figure below ). Eventually asteroids could be mined for rare minerals or for construction projects in space. A few missions have studied asteroids directly. NASA’s DAWN mission will be exploring asteroid Vesta in 2011 and 2012 and dwarf planet Ceres in 2015. The NEAR Shoemaker probe took this photo as it was about to land on 433 Eros in 2001. KQED: Asteroid Hunters Thousands of objects, including comets and asteroids, are zooming around our solar system; some could be on a collision course with Earth. 
QUEST explores how these Near Earth Objects are being tracked and what scientists are saying should be done to prevent a deadly impact. Learn more at: http://science.kqed.org/quest/video/asteroid-hunters/ - Asteroids are small rocky bodies that orbit the Sun and sometimes strike Earth. - Most asteroids reside in the asteroid belt, between Mars and Jupiter. - Near-earth asteroids are the ones most likely to strike Earth, and scientists are always looking out for a large one that may impact our planet and cause problems. Use these resources to answer the questions that follow. 1. What are asteroids? 2. Where are most asteroids found? Go to the Asteroid Table. 3. What is the largest asteroid and when was it discovered? 4. What has NEOWISE determined? 5. How many of the asteroids have been cataloged? 6. How are the asteroids detected? 7. What type of telescope is being used? 1. What is the reason there is a belt of asteroids between Mars and Jupiter? 2. Why do scientists look for asteroids that might strike our planet? 3. What do scientists hope to learn from missions to visit asteroids?
http://www.ck12.org/earth-science/Asteroids/lesson/Asteroids/r16/
4.3125
Understanding Reading Assessment The information on this page is provided to help parents understand how children's reading is assessed. How is Reading assessed? Progress in reading is assessed according to the extent to which pupils are gaining a deep understanding of the content taught for their year. Teachers assess against two key areas: reading words and reading comprehension. The national curriculum for reading aims to enable pupils to: - become a fluent reader using a variety of strategies to read words e.g. using phonic knowledge to recognise and blend phonemes, speedily recognise high frequency words and apply their growing knowledge of root words, prefixes and suffixes. - understand and comprehend what has been listened to and read e.g. discussing the significance of the title and events, and predicting what might happen or inferring feelings. Age related expectations Children will be assessed against the objectives for their year group, set out in the National Curriculum. - We will use the terms 'emerging', 'developing' and 'secure' to track their progress against these targets. - At the end of the year, most children will be expected to be 'secure.' This means they will have reached age related expectations. - Some children may not reach this stage and this will be reported accordingly. Some children may have 'mastered' these expectations. Please click on the links below to view the key objectives for your child's year group. These targets are written in 'child speak' to enable the children to understand the skills they need to develop. Key objectives - Year 1 Key objectives - Year 2 Key objectives - Year 3 and 4 Key objectives - Year 5 and 6 When is reading assessed? Teachers assess children's reading on a regular basis, using oral and some written work to gather information. Each half term, teachers gather each child's notes from reading sessions and where appropriate written work and perform a more formal assessment. This is used to support and challenge reading sessions during the following term. A baseline assessment is completed during the first term in reception. The EYFS baseline helps inform planning and teaching and is a method of measuring progress made from EYFS through to Key Stage 2. Towards the end of the summer term, children may also complete "optional" tests - this provides additional information to support teacher assessment and also gives the children experience of "proper tests". Year 2 children and Year 6 children complete more formal National Curriculum tests at the end of the year. How does reading at home help my child? We encourage all children to read regularly at home. Regular time spent reading to an adult plays an invaluable role in helping children to become fluent, confident readers. Being able to read well and understand the text is the most important core skill a child can acquire, since it underpins all learning at school. From the earliest age, children gain great enjoyment out of sitting with mum or dad, granny or granddad, looking at the pictures as a well-known story is shared… often over and over again! Older children, who are more fluent in reading, can develop their appreciation of authors and texts by talking about the book character, plot or underlying message. Please do talk to your class teacher if you would like any further information on the phonic or reading strategies used in school, or if you would like further information on how you can help your child develop these crucial skills.
http://www.puriton.somerset.sch.uk/Understanding-reading-levels.aspx
4.09375
|Symmetry group||Ci, [2+,2+], (×), order 2| In geometry, a parallelepiped is a three-dimensional figure formed by six parallelograms (the term rhomboid is also sometimes used with this meaning). By analogy, it relates to a parallelogram just as a cube relates to a square or as a cuboid to a rectangle. In Euclidean geometry, its definition encompasses all four concepts (i.e., parallelepiped, parallelogram, cube, and square). In the context of affine geometry, in which angles are not differentiated, its definition admits only parallelograms and parallelepipeds. Three equivalent definitions of parallelepiped are - a polyhedron with six faces (hexahedron), each of which is a parallelogram, - a hexahedron with three pairs of parallel faces, and - a prism of which the base is a parallelogram. Parallelepipeds are a subclass of the prismatoids. Any of the three pairs of parallel faces can be viewed as the base planes of the prism. A parallelepiped has three sets of four parallel edges; the edges within each set are of equal length. Since each face has point symmetry, a parallelepiped is a zonohedron. Also the whole parallelepiped has point symmetry Ci (see also triclinic). Each face is, seen from the outside, the mirror image of the opposite face. The faces are in general chiral, but the parallelepiped is not. The volume of a parallelepiped is the product of the area of its base A and its height h. The base is any of the six faces of the parallelepiped. The height is the perpendicular distance between the base and the opposite face. An alternative method defines the vectors a = (a1, a2, a3), b = (b1, b2, b3) and c = (c1, c2, c3) to represent three edges that meet at one vertex. The volume of the parallelepiped then equals the absolute value of the scalar triple product a · (b × c): V = |a · (b × c)| = |b · (c × a)| = |c · (a × b)|. This is true because, if we choose b and c to represent the edges of the base, the area of the base is, by definition of the cross product (see geometric meaning of cross product), A = |b| |c| sin θ = |b × c|, where θ is the angle between b and c, and the height is h = |a| cos α, where α is the internal angle between a and h. From the figure, we can deduce that the magnitude of α is limited to 0° ≤ α < 90°. On the contrary, the vector b × c may form with a an internal angle β larger than 90° (0° ≤ β ≤ 180°). Namely, since b × c is parallel to h, the value of β is either β = α or β = 180° − α. So cos α = |cos β|, and hence h = |a| |cos β|. We conclude that V = A h = |a| |b × c| |cos β|, which, by the definition of the scalar (dot) product, is exactly |a · (b × c)|. The latter expression is also equivalent to the absolute value of the determinant of a three-dimensional matrix built using a, b and c as rows (or columns): V = |det [[a1, a2, a3], [b1, b2, b3], [c1, c2, c3]]|. This is found by expanding the determinant along a row (cofactor expansion) into three reduced two-dimensional determinants taken from the original. (A short computational sketch of this formula appears at the end of this article.) If a, b, and c are the parallelepiped edge lengths, and α, β, and γ are the internal angles between the edges, the volume is V = abc √(1 + 2 cos α cos β cos γ − cos²α − cos²β − cos²γ). For parallelepipeds with a symmetry plane there are two cases: - it has four rectangular faces - it has two rhombic faces, while of the other faces, two adjacent ones are equal and the other two also (the two pairs are each other's mirror image). See also monoclinic. A perfect parallelepiped is a parallelepiped with integer-length edges, face diagonals, and space diagonals. In 2009, dozens of perfect parallelepipeds were shown to exist, answering an open question of Richard Guy. One example has edges 271, 106, and 103, minor face diagonals 101, 266, and 255, major face diagonals 183, 312, and 323, and space diagonals 374, 300, 278, and 272. Some perfect parallelepipeds having two rectangular faces are known. 
But it is not known whether there exist any with all faces rectangular; such a case would be called a perfect cuboid. Coxeter called the generalization of a parallelepiped in higher dimensions a parallelotope. Specifically in n-dimensional space it is called n-dimensional parallelotope, or simply n-parallelotope. Thus a parallelogram is a 2-parallelotope and a parallelepiped is a 3-parallelotope. More generally a parallelotope, or voronoi parallelotope, has parallel and congruent opposite facets. So a 2-parallelotope is a parallelogon which can also include certain hexagons, and a 3-parallelotope is a parallelohedron, including 5 types of polyhedra. The diagonals of an n-parallelotope intersect at one point and are bisected by this point. Inversion in this point leaves the n-parallelotope unchanged. See also fixed points of isometry groups in Euclidean space. The edges radiating from one vertex of a k-parallelotope form a k-frame of the vector space, and the parallelotope can be recovered from these vectors, by taking linear combinations of the vectors, with weights between 0 and 1. The word appears as parallelipipedon in Sir Henry Billingsley's translation of Euclid's Elements, dated 1570. In the 1644 edition of his Cursus mathematicus, Pierre Hérigone used the spelling parallelepipedum. The Oxford English Dictionary cites the present-day parallelepiped as first appearing in Walter Charleton's Chorea gigantum (1663). Charles Hutton's Dictionary (1795) shows parallelopiped and parallelopipedon, showing the influence of the combining form parallelo-, as if the second element were pipedon rather than epipedon. Noah Webster (1806) includes the spelling parallelopiped. The 1989 edition of the Oxford English Dictionary describes parallelopiped (and parallelipiped) explicitly as incorrect forms, but these are listed without comment in the 2004 edition, and only pronunciations with the emphasis on the fifth syllable pi (/paɪ/) are given. A change away from the traditional pronunciation has hidden the different partition suggested by the Greek roots, with epi- ("on") and pedon ("ground") combining to give epiped, a flat "plane". Thus the faces of a parallelepiped are planar, with opposite faces being parallel. - Oxford English Dictionary 1904; Webster's Second International 1947 - Sawyer, Jorge F.; Reiter, Clifford A. (2011). "Perfect parallelepipeds exist". Mathematics of Computation 80: 1037–1040. arXiv:0907.0220. doi:10.1090/s0025-5718-2010-02400-7.. - Properties of parallelotopes equivalent to Voronoi’s conjecture - Coxeter, H. S. M. Regular Polytopes, 3rd ed. New York: Dover, p. 122, 1973. (He defines parallelotope as a generalization of a parallelogram and parallelepiped in n-dimensions.) |Look up parallelepiped in Wiktionary, the free dictionary.| - Weisstein, Eric W., "Parallelepiped", MathWorld. - Weisstein, Eric W., "Parallelotope", MathWorld. - Paper model parallelepiped (net)
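As a small computational illustration of the volume formulas discussed above (a sketch only, not part of the original article), the scalar triple product a · (b × c) can be evaluated directly from three edge vectors; the helper names below are my own choices rather than anything from the source.

```python
# Minimal sketch: parallelepiped volume from three edge vectors that meet
# at one vertex, using the scalar triple product a . (b x c), which equals
# the determinant of the 3x3 matrix with rows a, b, c.

def cross(u, v):
    """Cross product of two 3-vectors."""
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

def dot(u, v):
    """Dot product of two 3-vectors."""
    return sum(ui * vi for ui, vi in zip(u, v))

def parallelepiped_volume(a, b, c):
    """Absolute value of the scalar triple product, i.e. |det[a; b; c]|."""
    return abs(dot(a, cross(b, c)))

# A rectangular box 2 x 3 x 4 has volume 24; a sheared copy (a genuine
# parallelepiped) keeps the same volume, because adding a multiple of one
# edge vector to another does not change the determinant.
print(parallelepiped_volume((2, 0, 0), (0, 3, 0), (0, 0, 4)))   # 24
print(parallelepiped_volume((2, 0, 0), (1, 3, 0), (5, -2, 4)))  # 24
```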
https://en.wikipedia.org/wiki/Parallelepiped
4.0625
- 10 - Not be forcibly removed from their lands or territories - 21.1 - The right to the improvement of economic and social conditions - 23 - The right to determine and develop priorities and strategies - 26 - The right to the lands, territories and resources - 27 - Open and transparent process to recognize and adjudicate the rights of indigenous peoples - 28 - The right to redress for lands, territories and resources taken or damaged - 32 - Free and informed consent prior to the approval of projects affecting lands or territories and other resources For details and background, see: - Declaration on the Rights of Indigenous Peoples (Wikipedia); - The full text of the Declaration (pdf); - The Indigenous peoples main page provided by OHCHR. - The articles included here have been selected on the basis of the International Standards page, provided by OHCHR. What are the stated objectives and aims of the agreement? States where indigenous peoples live. The indigenous peoples. Values & Claims Indigenous Peoples are equal to all other peoples, and their rights should be recognized accordingly. Claims on land and territories are one area where the rights of indigenous peoples are often neglected.
http://www.actor-atlas.info/treaty:un-declaration-on-the-rights-of-indigenous-peoples
4
Most middle school students must learn to use a triple beam balance scale at some point during their science classes. Often used by physics or chemistry teachers to demonstrate the principle of mass, these devices can be used to weigh any object within their weight limitations. Triple beam balance scales function by balancing an object with three counterweights—attached to the scale—to accurately find the object's weight. Using one of these devices is not difficult. Items you will need - Object to weigh Calibrate the scale by sliding all three weight poises (the metal brackets that slide along the three beams) to their leftmost positions. Twist the zeroing screw (usually located below the pan in which you place the object to be weighed) until the balance pointer lines up with the fixed zero mark. Place the object to be weighed on the center of the pan. Slide the 100-gram poise right one notch at a time. When the indicator drops below the fixed mark, move the poise left one notch. For instance, if your object weighs 487 grams, the 100-gram indicator would drop below the fixed mark on the fifth notch (500 grams). Move the poise back to the 400-gram notch. Slide the 10-gram poise right one notch at a time. When the indicator drops below the fixed mark, move the poise left one notch. In the case above, the 10-gram indicator would drop below the fixed mark on the ninth notch (90 grams). Move the poise back to the 80-gram notch. Slide the 1-gram poise slowly across the third beam. There are no notches, so keep an eye on the pointer as you slide. Stop sliding when the pointer lines up with the fixed mark. In the case above, the 1-gram poise will cause the pointer to line up at the fixed mark at 7 grams. Add the values of all three beams to determine the mass of your object. In the case of our example, add 400 + 80 + 7, resulting in an object mass of 487 grams. - Repeat your measurements twice to be sure of your results. This is especially important in science labs, where operator error can skew the results of an experiment. - Failing to zero out the scale before using the triple beam balance can result in inaccurate measurements.
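The reading procedure above is essentially a greedy, beam-by-beam search, which the following sketch imitates. It is illustrative only, assuming an ideal balance: a real instrument is read by eye against the fixed mark, and the function and variable names here are invented rather than taken from the article.

```python
# Sketch of the procedure described above: for each notched beam, slide the
# poise right one notch at a time until the (simulated) pointer would drop
# below the fixed mark, then step back; the 1-gram beam is read continuously.
# Assumes an ideal, already-zeroed balance; purely illustrative.

def read_triple_beam(true_mass_g):
    """Return (hundreds, tens, ones) poise settings for a given mass."""
    reading = 0.0
    for notch in (100, 10):                      # notched beams, whole steps
        while reading + notch <= true_mass_g:    # pointer stays at or above the mark
            reading += notch
    hundreds = int(reading // 100) * 100
    tens = int(reading % 100)
    ones = round(true_mass_g - reading, 1)       # slide until pointer lines up
    return hundreds, tens, ones

h, t, o = read_triple_beam(487.0)
print(h, t, o, "->", h + t + o, "g")   # 400 80 7.0 -> 487.0 g
```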
http://classroom.synonym.com/use-triple-beam-balance-scale-2503.html
4.0625
Posted on: Aug 18, 2005 It turns out you can’t judge an asteroid by its cover, according to a recent study in the journal Nature. Or at least you can’t accurately date a certain asteroid called 433 Eros by counting the impact craters on its surface -- the traditional method for determining an asteroid’s age. Peter Thomas, a senior research associate in astronomy at Cornell University and lead author on the paper, and Mark Robinson, research associate professor of geological sciences at Northwestern University, analyzed images of Eros gathered four years ago by the Near Earth Asteroid Rendezvous mission. The mission mapped the 20-mile-long, potato-shaped asteroid and its thousands of craters in detail. The two researchers focused on a large impact crater, known as the Shoemaker crater, and a few unusual crater-free areas. In the Nature article, Thomas and Robinson show that the asteroid’s smooth patches can be explained by a seismic disturbance that occurred when a meteoroid crashed into Eros, shaking the asteroid and creating Shoemaker crater. The shaking caused loose surface material to fill some small craters, essentially erasing craters from approximately 40 percent of Eros’ surface and making the asteroid appear younger than its actual age. The fact that seismic waves were carried through the center of the asteroid after the impact shows that the asteroid’s interior is cohesive enough to transmit such waves, say the authors. And the smoothing-out effect within a radius of up to 5.6 miles from the 4.7-mile Shoemaker crater -- even on the opposite side of the asteroid -- indicates that Eros’ surface is loose enough to get shaken down by the impact. Asteroids are small, planet-like bodies that date back to the beginning of the solar system, so studying them can give astronomers insight into the solar system’s formation. And while no asteroids currently threaten Earth, knowing more about their composition could help prepare for a possible future encounter. Eros is the most carefully studied asteroid, in part because its orbit brings it close to earth. Thomas and Robinson considered various theories for the regions of smoothness, including the idea that ejecta from another impact had blanketed the areas. But they rejected the ejecta hypothesis when calculations showed an impact Shoemaker’s size wouldn’t create enough material to cover the surface indicated. And even if it did, they add, the asteroid’s irregular shape and motion would cause the ejecta to be distributed differently. In contrast, the shaking-down hypothesis fits the evidence neatly. 
http://www.physlink.com/news/081805DatingAnAsteroid.cfm
4.25
Blood pressure: what is your target? How is blood pressure measured? Blood pressure is measured using an instrument called a sphygmomanometer. It consists of an inflatable cuff, an inflating bulb, and a gauge to show the blood pressure. The cuff is wrapped around the upper arm, and inflated to a pressure where the pulse in the arm can no longer be heard or felt. The cuff pressure is then raised slightly beyond this point, and then slowly lowered in order to get a reading of the systolic and diastolic blood pressure. The systolic reading (the first number of the 2) indicates the pressure of blood within your arteries during a contraction of the left ventricle of the heart. The diastolic reading (the second number) indicates the pressure within the arteries when the heart is at rest. Blood pressure is measured in millimetres of mercury (mmHg), for example 120/80 mmHg (known as 120 over 80). What is normal blood pressure? According to the Heart Foundation of Australia, as a general guide: - blood pressure just below 120/80 mmHg can be classified as 'normal'; and - blood pressure between 120/80 and 140/90 mmHg is classified as 'high-normal'. A person is defined by the Heart Foundation as having high blood pressure (hypertension) if they: - have a systolic pressure greater than or equal to 140 mmHg; and/or - a diastolic pressure greater than or equal to 90 mmHg. Hypertension is further classified as mild, moderate or severe as the pressure increases above this level. Low blood pressure, or hypotension, is not as easy to define as it is usually relative to a person’s normal blood pressure reading, and varies between different people. It generally refers to a blood pressure below an average of about 90/60 mmHg. Getting an accurate reading According to the Heart Foundation, the diagnosis of high blood pressure should be based on multiple blood pressure measurements taken on separate occasions. It is recommended that you do not smoke or drink caffeine-containing drinks for 2 hours before having your blood pressure monitored, as this can cause an increase in your readings. Self-monitoring of blood pressure in your own environment or ambulatory monitoring of blood pressure is also used to help diagnose high blood pressure. For ambulatory blood pressure monitoring, you wear a portable automatic blood pressure machine for 24 hours while going about your usual daily routine. Variations in blood pressure are normal and may occur depending on where and when the blood pressure is taken. Some people who have raised blood pressure readings taken at the doctor’s surgery actually have acceptable levels outside the surgery, when under normal stress levels. This is known as ‘white-coat’ hypertension. There are also people with ‘reverse white-coat’ hypertension (also known as masked hypertension), who have normal blood pressure when measured in the clinic but high ambulatory blood pressure readings (those recorded during normal daily activities). Keeping on target Your target blood pressure may vary according to whether you have other conditions that can increase your risk of cardiovascular (heart and blood vessel) disease or conditions that have been caused by high blood pressure. Raised blood pressure is a major risk factor for cardiovascular disease, and the higher your blood pressure, the greater your chance of having heart disease or stroke. For this reason it is important that you have your blood pressure monitored regularly, and that you always take any medicine prescribed for hypertension. 
Hypertension can also be controlled to a large extent by lifestyle modifications such as reducing excess weight, undertaking regular physical activity, and giving up smoking. Dietary interventions such as reducing your alcohol and salt intake and following a healthy eating plan may also help to lower your blood pressure and reduce your absolute risk of cardiovascular disease. - 1. Heart Foundation. Guide to management of hypertension 2008. Assessing and managing raised blood pressure in adults; updated December 2010. http://www.heartfoundation.org.au/SiteCollectionDocuments/HypertensionGuidelines2008to2010Update.pdf (accessed Jan 2015). 2. Hypertension (revised October 2012). In: eTG complete. Melbourne: Therapeutic Guidelines Limited; 2014 Nov. http://online.tg.org.au/complete/ (accessed Jan 2015). 3. National Vascular Disease Prevention Alliance. Guidelines for the management of absolute cardiovascular disease risk; 2012. http://strokefoundation.com.au/site/media/AbsoluteCVD_GL_webready.pdf (accessed Jan 2015). 4. MayoClinic.com. Low blood pressure (hypotension) (updated 2 May 2014). http://www.mayoclinic.com/health/low-blood-pressure/DS00590 (accessed Jan 2015).
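As a rough illustration of the Heart Foundation bands quoted above (and not medical advice or part of the cited guidelines), a single reading could be labelled as follows. The article stresses that a diagnosis should rest on multiple measurements taken on separate occasions, which this sketch does not attempt to model; the function name and labels are my own.

```python
# Illustrative labelling of one blood pressure reading using the bands quoted
# above: below 120/80 "normal", 120/80 up to 140/90 "high-normal", and
# systolic >= 140 and/or diastolic >= 90 "high (hypertension)". Not a
# diagnostic tool; a diagnosis requires multiple readings.

def classify_bp(systolic_mmHg, diastolic_mmHg):
    """Label a single blood pressure reading against the bands above."""
    if systolic_mmHg >= 140 or diastolic_mmHg >= 90:
        return "high (hypertension)"
    if systolic_mmHg >= 120 or diastolic_mmHg >= 80:
        return "high-normal"
    return "normal"

print(classify_bp(118, 76))   # normal
print(classify_bp(130, 85))   # high-normal
print(classify_bp(150, 95))   # high (hypertension)
```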
http://www.mydr.com.au/heart-stroke/blood-pressure-what-is-your-target
4.09375
Roundhouse is a term applied by archaeologists and anthropologists to a type of house with a circular plan, usually with a conical roof. In the later part of the 20th century, modern designs of roundhouse eco-buildings started to be built using techniques such as cob, cordwood or straw bale walls and reciprocal frame green roofs.
Roundhouses were the standard form of housing built in Britain from the Bronze Age throughout the Iron Age, and in some areas well into the sub-Roman period. They had walls made either of stone or of wooden posts joined by wattle-and-daub panels, topped by a conical thatched roof, and ranged in size from less than 5 m in diameter to over 15 m. The Atlantic roundhouse, broch and wheelhouse styles were used in Scotland. The remains of many Bronze Age roundhouses can still be found scattered across open heathland, such as Dartmoor, as stone 'hut circles'.
Most of what is assumed about these structures is derived from the layout of the postholes, although a few timbers have been found preserved in bogs. The rest has been postulated by experimental archaeology, which has shown the most likely form and function of the buildings. For example, experiments have shown that a conical roof with a pitch of about 45 degrees would have been the strongest and most efficient design. Peter J. Reynolds also demonstrated that, although a central fire would have been lit inside for heating and cooking, there could not have been a smoke hole in the apex of the roof, for this would have caused an updraft that would have rapidly set fire to the thatch. Instead, smoke would have accumulated harmlessly inside the roof space and slowly leaked out through the thatch.
Many modern simulations of roundhouses have been built, including:
- Bodrifty Iron Age Settlement, Cornwall, England
- Brigantium Archeological Centre, High Rochester, Northumberland, England
- Butser Ancient Farm, Hampshire, England
- Cockley Cley, near Swaffham, Norfolk, England
- Flag Fen, near Peterborough, England
- Mellor roundhouse reconstruction, Greater Manchester, England
- Peat Moors Centre, Somerset, England (closed to the public 31 October 2009)
- Raincliffe Woods, Scarborough, North Yorkshire, England (roof destroyed by fire in April 2013; timbers and thatch removed by Scarborough Conservation Volunteers; walls undamaged)
- Ryedale Folk Museum, near Pickering, North Yorkshire, England
- St Fagans National History Museum, South Glamorgan, Wales
- Scottish Crannog Centre, Loch Tay, Perthshire, Scotland (roundhouse reconstruction on a man-made island)
- Tatton Iron Age roundhouse and pit, Cheshire, England
Modern British roundhouses
New designs of roundhouse are again being built in Britain and elsewhere. In the UK, straw bale construction or cordwood walls with reciprocal frame green roofs are used. There is a manufacturer of contemporary roundhouses in Cheshire, England, using modern materials and engineering to bring the circular floorplan back for modern living. That Roundhouse, an early example of a modern roundhouse dwelling, was built in the Pembrokeshire Coast National Park, Wales, without planning permission, as part of the Brithdir Mawr village, which was discovered by the authorities in 1998. It is constructed from a wooden frame of hand-cut Douglas fir forest thinnings with cordwood infill and a reciprocal frame turf roof, based on permaculture principles and built mainly from local natural resources.
It was subject to a lengthy planning battle, including a court injunction to force its demolition, before finally receiving planning approval for 3 years in September 2008.
Trulli (singular: trullo) are houses with conical roofs, and sometimes circular walls, found in parts of the southern Italian region of Apulia.
Galicia – Asturias
A palloza is a traditional thatched house found in the Serra dos Ancares in Galicia, Spain, and in the south-west of Asturias. It is circular or oval, about ten to twenty metres in diameter, and is built to withstand severe winter weather at a typical altitude of 1,200 metres. The main structure is stone, and is divided internally into separate areas for the family and their animals, with separate entrances. The roof is conical, made from rye straw on a wooden frame. There is no chimney; the smoke from the kitchen fire seeps out through the thatch. As well as living space for humans and animals, a palloza has its own bread oven, workshops for wood, metal and leather work, and a loom. Only the eldest couple of an extended family had their own bedroom, which they shared with the youngest children. The rest of the family slept in the hay loft, in the roof space.
- See also Castros in Spain
Raun Haus, Papua New Guinea
- Aston, Mick (2001-10-05). "Peter Reynolds: archaeologist who showed us what the Iron Age was really like (obituary)". The Guardian (London).
- "Secret village to be pulled down". BBC News. 1998-10-23. Retrieved 2009-04-12.
- Barkham, Patrick (2009-04-12). "Round the houses". The Guardian (London). Retrieved 2009-04-12.
- Eric Rosenthal (1961–1978). Encyclopaedia of Southern Africa. London: F. Warne. p. 35. ISBN 0-7232-1487-5.
- "Cob roundhouse".
- "Raun Haus / Round Haus".
- Animation showing how an ancient British roundhouse may have been constructed
- Characterising the Welsh Roundhouse: chronology, inhabitation and landscape
- A 21st century iron age round house
- Some examples of Reconstructed Celtic Roundhouses
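The 45-degree roof pitch suggested by the experimental work described above fixes the basic geometry of a conical roof, so it is possible to sketch roughly what a roundhouse of a given size implies in timber and thatch. The Python snippet below is purely illustrative cone geometry under that assumed pitch; the diameters are the range quoted above, not measurements of any particular site.

```python
import math

def conical_roof(diameter_m, pitch_deg=45.0):
    """Cone geometry for a roundhouse roof on walls of the given diameter.

    With a 45-degree pitch the roof rises as far as it spans, so its height
    equals the wall radius. This is textbook geometry, not excavation data.
    """
    radius = diameter_m / 2.0
    height = radius * math.tan(math.radians(pitch_deg))
    rafter = math.hypot(radius, height)       # slant length from wall top to apex
    thatch_area = math.pi * radius * rafter   # lateral surface area of the cone
    return height, rafter, thatch_area

# Roundhouses ranged from under 5 m to over 15 m in diameter (see above).
for d in (5, 10, 15):
    h, r, a = conical_roof(d)
    print(f"{d} m diameter: roof height {h:.1f} m, rafters {r:.1f} m, about {a:.0f} square metres of thatch")
```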
https://en.wikipedia.org/wiki/Roundhouse_(dwelling)
4
Irish Volunteers (18th century)
The Volunteers (also known as the Irish Volunteers) were local militias raised by local initiative in Ireland in 1778. Their original purpose was to guard against invasion and to preserve law and order at a time when British soldiers were withdrawn from Ireland to fight abroad during the American Revolutionary War and the government failed to organise its own militia. Taking advantage of Britain's preoccupation with its rebelling American colonies, the Volunteers were able to pressure Westminster into conceding legislative independence to the Dublin parliament. Members of the Belfast 1st Volunteer Company laid the foundations for the establishment of the United Irishmen organisation. The majority of Volunteer members, however, were inclined towards the yeomanry, which fought and helped defeat the United Irishmen in the Irish rebellion of 1798.
As far back as 1715 and 1745, self-constituted bodies of defensive local forces were formed in anticipation of Stuart invasions. For example, with the declaration of war with France in 1744 and the landing of Prince Charles Edward in Scotland in 1745, a corps of 100 men was enrolled in Cork, known as "The True Blues", which formed one of the regiments of the "United Independent Volunteers". In 1757 and 1760 volunteer units were formed in response to the Seven Years' War and to the French landing at Carrickfergus in 1760. The roll-call of the militia that marched on the French at Carrickfergus, listed in the "Collectanea politica" published in 1803, was titled "Ulster volunteers in 1760". From 1766 onwards units were embodied by local landlords in various parts of the country for the preservation of peace and the protection of property. Early volunteer groups (which later became part of the Volunteers) included: First Volunteers of Ireland (1 July 1766); Kilkenny Rangers (2 June 1770); First Magherafelt Volunteers (June 1773); and the Offerlane Blues (10 October 1773).
The rise of the Volunteers was a spontaneous event fired by patriotism and the threat of invasion, as another French landing was anticipated when war broke out in 1778. With British troops being dispatched from Ireland for the war with the American colonies, the landed gentry reacted nervously, and misunderstandings arose about Ireland's defence capabilities. Claims that Ireland was ill-prepared for an attack, along with alleged negligence from Dublin Castle, were used to justify the existence of Volunteer companies and their role in defending Ireland. In fact around 4,000 soldiers had been dispatched to the American colonies, leaving as many as 9,000 behind in Ireland. The Volunteers were built upon existing foundations. Dublin Castle had created militias throughout the 18th century; however, these had fallen into disuse. The Volunteers filled the gap left behind, with possibly half of their officers having held commissions in the militia. Historian Thomas Bartlett claims that the purpose of the militia as defined in 1715 would have fitted with the aims of the Volunteers: "of suppressing... all such insurrections and rebellions, and repelling of invasions".
Along with this, Irish Protestants of all ranks had a long, strong tradition of self-defence, having formed groups to resist and pursue agrarian insurgents and keeping a watchful eye on Catholics when threats arose. The Volunteers were independent of the Irish Parliament and Dublin Castle, and this was an established fact by 1779. It is claimed that had the Lord Lieutenant of Ireland, John Hobart, 2nd Earl of Buckinghamshire, been more proactive and assertive, the Volunteers could have come under some form of government control. The regular military deemed the Volunteers of low value with regard to helping repulse a foreign threat. Instead they held the view that they could be a "serviceable riot police", and it was in this role that they distinguished themselves. For example, Volunteer companies did duty whilst regular troops had been called away, whilst others were used to pursue agrarian insurgents. When protests were organised in Dublin following the introduction of a bill in the Irish Parliament seeking to outlaw textile workers' combinations, the Volunteers were mobilised to maintain the peace in case of public disorder. The British victory over the Spanish off Cape St. Vincent in 1780 saw the fear of invasion dissipate, and the Volunteers also became involved in politics. Initially they agitated for reforms and measures to promote Ireland's prosperity, but later they moved from peaceful persuasion to "the threat of armed dictatorship". In the end Parliament was victorious. The Volunteers, however, were also marked by liberal political views. For instance, although only Anglican Protestants were allowed to bear arms under the Penal Laws, the Volunteers admitted Presbyterians and a limited number of Catholics, reflecting the recent Catholic Relief Act of 1778. The Volunteers additionally provided a patriotic outlet, with each corps becoming a debating society. This brought about a shift in power, with the Volunteers being controlled by progressive, politically minded people and not by the Establishment. The Volunteers also saw the annual Protestant commemorations such as the Battle of the Boyne and the Battle of Aughrim become displays of patriotic sentiment. In Dublin on 4 November 1779, the Volunteers took advantage of the annual commemoration of King William III's birthday, marching to his statue in College Green and demonstrating for Free Trade between Ireland and England. Previously, under the Navigation Acts, Irish goods had been subject to tariffs upon entering England, whereas English goods could pass freely into Ireland. The Volunteers paraded fully armed with the slogan "Free Trade or this", the "this" referring to a cannon; the slogan "Free trade or a Speedy Revolution" was also cited. According to Liz Curtis the English regime in Ireland was vulnerable, and the Volunteers used this to press for concessions from England using their new-found strength. This demand of the Volunteers was quickly granted by the British government. The Dublin Volunteers' review in College Green on 4 November 1779, saluting a statue of King William III, was painted by Francis Wheatley. On 4 June 1782, the Belfast Troop of Light Dragoons volunteer company and the Belfast Volunteer Company paraded through Belfast in honour of the King's birthday. After firing three volleys, they marched to Cave Hill, where they were joined by the Belfast Artillery Company, who upon their arrival fired a "royal salute of twenty-one guns".
In contrast, and in a sign of changing opinions, on 14 July 1792, the anniversary of the fall of the Bastille, the Belfast Volunteers exuberantly paraded through Belfast and agreed to send a declaration to the national assembly of France, to which they received "rapturous replies". On 28 December 1781, members of the Southern Battalion of the County Armagh Volunteers (who formed the First Ulster Regiment) convened and resolved to hold a meeting in the "most central town of Ulster, which we conceive to be Dungannon", which delegates from every volunteer association in the province of Ulster were requested to attend. The date of this meeting was pencilled in for "the 15th day of February next, at ten o'clock in the forenoon." On the arranged date, 15 February 1782, delegates from 147 Volunteer corps arrived at the Presbyterian church in Scotch Street, Dungannon, for what would become known as the "Dungannon Convention of 1782". This church had formerly been the favourite meeting place of the Presbyterian Synod of Ulster and later the supreme ecclesiastical court of Irish Presbyterians. After the Volunteer convention it was known as the "Church of the Volunteers". This church was used for the next three conventions of Ulster Volunteer corps: 21 June 1782, with delegates from 306 companies attending; 8 September 1783, with delegates from 270 companies; and almost a decade later on 15 February 1793, when the "fires of patriotism that marked the birth of the movement were burning low" and the meeting "failed to kindle them anew". The first meeting is the best known. Many of the Volunteers were just as concerned with securing Irish free trade and opposing English governmental interference in Ireland as they were with repelling the French. This resulted in them pledging support for resolutions advocating legislative independence for Ireland, whilst proclaiming their loyalty to the British Crown. The first convention, according to Sir Jonah Barrington, saw 200 delegates marching two by two into the church "steady, silent, and determined", clothed in their uniforms and bearing arms. A poem by Thomas Davis states how "the church was full to the door". The lower part of the church was reserved for delegates, with the gallery for their friends, who required tickets for admission. Some of those who attended the first and second conventions, however, consider them to be equally important. After pressure from the Volunteers and a Parliamentary grouping under Henry Grattan, greater autonomy and powers (legislative independence) were granted to the Irish Parliament, in what some called "the constitution of 1782". This resulted in the Volunteers at the third convention proceeding to demand parliamentary reform; however, as the American War of Independence was ending, the British government no longer feared the threat of the Volunteers. The fourth convention in 1793 was held after a period of steep decline in Volunteer membership (see Demise below). This was partly the result of sharp division of opinion amongst Volunteers on political matters, so much so that the County Armagh companies refused to send any delegates to the fourth convention. The bowl that was used as the pledging-cup of the Volunteers at the first convention was rediscovered in the 1930s in County Tyrone. This bowl was tub-shaped, resembling an Irish mether, and had the original owner's (John Bell) crest and initials engraved on the inside, as well as on its wooden base.
Decorating this pledging-cup were three silver hoops bearing nine toasts, each of which was numbered as follows: 1. The King, 2. The Queen, 3. The Royal Family, 4. The Memory of St. Patrick, 5. The Sons of St. Patrick, 6. The Daughters of St. Patrick, 7. The Irish Volunteers, 8. The Friends of Ireland, 9. A Free Trade. An obelisk commemorating the Dungannon Convention of 1782 was erected that year by Sir Capel Molyneux on a hill a few miles north-east of Armagh city. On it is the following inscription: "This obelisk was erected by the Right Hon. Sir Capel Molyneux, of Castle Dillon, Bart., in the year 1782, to commemorate the glorious revolution which took place in favour of the constitution of the kingdom, under the auspices of the Volunteers of Ireland."
Motifs and mottos
The primary motif of the Volunteers was an Irish harp with the British crown mounted above it, with either the name of the company or a motto curved around it, or both, for example "Templepatrick Infantry" or "Liberty & Our Country". This harp and crown motif was prevalent on the Volunteer companies' flags, belt-plates and gorgets. Some included the Royal cypher "G.R.", standing for King George III. Shamrocks also commonly featured. Other mottos included, amongst variations: For Our King & Country, Pro Rege et Patria (for King and Country), Quis Separabit (who shall separate?), and Pro Patria (for Country). Another Volunteer motto is the oft-repeated Pro Aeris et Focis (for our altars and our hearths), a truncated form of Pro Caesare, Pro Aeris et Focis (for our King, our altars, and our hearths), which was also used.
Competitions and awards
Competitions were held between Volunteer corps, with medals given out as marks of distinction for the best marksmen and swordsmen, as well as for the most efficient soldiers. The members of Volunteer corps from the province of Ulster, more specifically from the counties of Antrim, Armagh, Down, Londonderry, and Tyrone, featured quite prominently and took an honourable place. Examples of marksmen competitions included best shot with ball and best target shot at 100 yards. Rewards of merit were also given. Originally each Volunteer company was an independent force typically consisting of 60 to 80 men. In some parts of the country a company could consist of between 60 and 100 men, and companies were raised in each parish where the number of Protestants made it viable. Alongside the parish companies, towns had one or more companies. A company's highest-ranking officer was a captain, followed by a lieutenant and an ensign. They also had surgeons and chaplains. Local Volunteer companies would later amalgamate into battalions led by colonels and generals, some of which consisted of ten to twelve companies. An example of the amalgamation of Volunteer companies is that of the First Ulster Regiment, County Armagh. The First Armagh Company was raised in Armagh city on 1 December 1778, and on 13 January 1779 Lord Charlemont became its captain. As many new Volunteer corps were being raised throughout the county, a meeting was held at Clare on 27 December 1779, where they discussed forming these corps into battalions, with commanding officers appointed and artillery companies raised to complement them. This saw the creation of the Northern Battalion and Southern Battalion of the First Ulster Regiment.
Unlike the volunteer militias formed earlier in the 18th century, which had Crown-commissioned officers, the private members of Volunteer companies, in a form of military democracy, appointed their own, and were "subject to no Government control". These officers were subject to being dismissed for misconduct or incapacity. An example of Volunteers taking action against their own officers would be two officers commissioned to the Southern Battalion of the First Ulster Regiment: Thomas Dawson (commander) and Francis Dobbs (major). Both would also accept commissions in a Fencible regiment. This met with great disapproval amongst local volunteer companies, who found them no longer acceptable as field officers. Lord Charlemont's own company, the First Armagh Company, even protested against the formation of Fencible regiments. By 1 January 1783, both Dawson and Dobbs had received their Fencible commissions and ceased to be volunteers. Of the 154 companies of Volunteers listed in The Volunteer's Companion (1794), 114 had scarlet uniforms, 18 blue, 6 green, 1 dark green, 1 white, 1 grey, 1 buff, and 12 undetailed. The details of the uniform of each corps varied depending on their choice of colouring for the facing on their uniforms, and for some the lace and buttons, amongst other pieces. For example, the Glin Royal Artillery's uniform was "Blue, faced blue; scarlet cuffs and capes; gold lace", whilst the Offerlane Blues' uniform was "Scarlet, faced blue; silver lace". The Aghavoe Loyals had "scarlet, faced blue", whilst the Castledurrow Volunteers wore green uniforms faced with white and silver lining. Lord Charlemont desired that all county companies should have the same uniform of scarlet coats with white facings; however, some companies had already chosen their colours, or were in existence before his involvement. Whilst information on clothing is scant, it has been suggested that most uniforms were made locally, with badges, buttons, cloth, and hats being procured from places like Belfast and Dublin. The Belfast News Letter carried advertisements from merchants offering plated and gilt Volunteer buttons, furnished belt and pouch plates, engravings, regimental uniform cloth, and even tents. The painting of Volunteer drums and colours was also offered. The naming of some Volunteer companies may show a continuation of earlier Protestant anti-Catholic traditions, with corps named after "Protestant" victories such as the Boyne, Aughrim and Enniskillen. Another "Protestant" victory, Culloden, the final battle of the Jacobite Rising of 1745, which saw the defeat of the Young Pretender, was used by the Culloden Volunteers of Cork company. Reviews of Volunteer corps were held from the earliest days of volunteering, with county companies travelling long distances to attend ones like the Belfast Reviews. Some reviews, such as those in County Armagh, were originally on a smaller scale, and consisted of a few companies assembling and performing field exercises in a particular district. They later became larger affairs, with brigades consisting of battalions of companies. The order of the day has been recorded for the Newry Review of 1785: most of the attending companies had marched to Newry on the Thursday, the day on which Lord Charlemont also arrived. On Friday the companies that formed the First Brigade assembled and marched to the review ground, where Lord Charlemont would inspect them. His arrival was announced by the firing of nine cannons.
On the Saturday the same programme was repeated, this time for the Second Brigade. The review also demonstrated the attack and defence of Newry. As the period of the Volunteers drew to an end, some, such as those from the County Armagh Volunteers, started to consider the larger reviews a waste of time and energy. One Volunteer, Thomas Prentice, voiced a common opinion to Lord Charlemont that they would rather have a few companies meet a few times during the summer for drilling and improvement. In March 1793 the assembly of armed associations was prohibited, making it illegal to hold a review. The last planned review was to have been held near Doagh, County Antrim, on 14 September 1793. Ammunition for it had been dispatched in secret a few days prior to companies with serviceable arms, so that they could resist any opposition they encountered. An hour before the review was to be held, news spread that the 38th Regiment, the Fermanagh Militia, and a detachment of artillery had arrived in Doagh, resulting in the review being abandoned with no date for resumption. The Volunteers had no unified view regarding Catholic emancipation, and their attitudes towards Catholics were not uniformly hostile. The threat posed by Catholics was deemed to be near non-existent, with local Volunteers declaring themselves "under no apprehensions from the Papists". The Volunteers exerted considerable pressure on the British government to ease the Penal Laws on Catholics, such as with the Relief Acts of 1778 and 1782. The passing of the Relief Act of 1778 resulted in the Catholic hierarchy giving their support to the British in the American War of Independence, even going so far as to hold fasts for the success of British arms. The war also offered a chance for Catholics to show their loyalty. As early as June 1779 this perceived lack of threat allowed Catholics to enlist in some Volunteer companies, and in counties Wexford and Waterford they tried to set up their own. The Catholic hierarchy, however, were "resolutely suspicious" of the Volunteers, even though generally Catholics "cheered on the Volunteers". At the Dungannon Convention of 1782, a resolution was passed rejoicing at the relaxation of the Penal Laws, whilst saying that Catholics "should not be completely free from restrictions". In contrast, at Ballybay, County Monaghan, the Reverend John Rodgers addressed a meeting of Volunteers, imploring them "not to consent to the repeal of the penal laws, or to allow of a legal toleration of the Popish religion". John Wesley wrote in his Journal that the Volunteers should "at least keep the Papists in order", whilst his letter to the Freeman's Journal in 1780, with which many would have agreed, argued that he would not have the Catholics persecuted at all, but rather hindered from being able to cause harm.
County Armagh disturbances
In the 1780s sectarian tensions rose to dangerous levels in County Armagh, culminating in sectarian warfare between the Protestant Peep o' Day Boys and the Catholic Defenders that raged for over a decade. Many local Volunteers, holding partisan views, became involved in the conflict. In November 1788, the Benburb Volunteers were taunted by a "Catholic mob" near Blackwaterstown. The Benburb Volunteers then opened fire upon the Catholics, killing two and mortally wounding three others. In July 1789, the Volunteers assaulted the Defenders who had assembled at Lisnaglade Fort near Tandragee, resulting in more lives being lost. In 1797 Dr.
William Richardson wrote a detailed analysis for the 1st Marquess of Abercorn, in which he claimed that the troubles were caused by the excitement of volunteering during the American Revolutionary War, which gave "the people high confidence in their own strength".
Belfast 1st Volunteer Company
Outside of Ulster, Catholics found few supporters, as Protestants were a minority concerned with their privileges. In Ulster, Protestants and Catholics were almost equal in number and sectarian rivalries remained strong, exemplified by the County Armagh disturbances. In contrast, east of the River Bann in counties Antrim and Down, the Protestants were such an "overwhelming majority" that they had little to fear from Catholics, and became their biggest defenders. According to The Volunteers Companion, printed in 1784, there were five different Volunteer companies in Belfast, the first of which was the Belfast 1st Volunteer Company, formed on 17 March 1778. Delegates from this company to the national convention of 1782 were "bitterly disappointed" that their fellow Volunteers were still opposed to giving Catholics the vote. In 1783 they became the first company of Volunteers in Ireland to "defiantly" admit Catholics into their ranks, and in May 1784 attended mass at St. Mary's chapel. Indeed, the building of this chapel was largely paid for by the Belfast 1st Volunteer Company. In sharp contrast to this, no Roman Catholic was ever admitted into a County Armagh company. In 1791, the Belfast 1st Volunteer Company passed its own resolution arguing in favour of Catholic emancipation. In October that year the Society of United Irishmen was founded, initially as an offshoot of the Volunteers. In 1792, a new radical company was created as part of the Belfast Regiment of Volunteers, the Green Company, under which guise the United Irishmen held their initial meetings. Wolfe Tone, a leading member of the United Irishmen, was elected an honorary member of the Green Company, which he also calls the First Company, hinting that the Belfast 1st Volunteer Company reorganised itself into the Green Company. Eventually the United Irishmen would advocate revolutionary and republican ideals inspired by the French Revolution. Ironically, it was only 31 years earlier that Belfast had called upon volunteer militias from counties Antrim, Armagh, and Down to defend it from the French. The Volunteers became less influential after the end of the war in America in 1783, and rapidly declined except in Ulster. Whilst volunteering remained of interest in counties Antrim and Down, in other places, such as neighbouring County Armagh, interest was in serious decline, as was membership. Internal politics too played a role in the Volunteers' demise, with sharp divisions of opinion regarding political affairs, possibly including "disapproval of the revolutionary and republican sentiments then being so freely expressed", especially amongst northern circles. The ultimate demise of the Volunteers occurred during 1793 with the passing of the Gunpowder Act and Convention Act, both of which "effectively killed off Volunteering", whilst the creation of a militia, followed by the yeomanry, served to deprive the Volunteers of their justification of being a voluntary defence force. Whilst some Volunteer members would join the United Irishmen, the majority were inclined towards the Yeomanry, which was used to help put down the United Irishmen's rebellion in 1798.
Some of these United Irishmen and Yeomen had received their military training in the same Volunteer company; for example, the Ballymoney company's Alexander Gamble became a United Irishman, whilst George Hutcinson, a captain in the company, joined the Yeomanry. It was the Volunteers of 1782 that launched a paramilitary tradition in Irish politics, a tradition that, whether nationalist or unionist, has continued to shape Irish political activity with the ethos that "the force of argument had been trumped by the argument of force". The Volunteers of the 18th century set a precedent for using the threat of armed force to influence political reform. George Washington, also a member of the landed gentry, had written about them: "Patriots of Ireland, your cause is our own". Their political aims were limited and their legacy ambiguous, combining elements of both later Irish nationalism and Irish unionism. The Ulster Volunteers, founded in 1912 to oppose Irish Home Rule, made frequent reference to the Irish Volunteers and attempted to link their activities with theirs. They shared many features such as regional strength, leadership, and a Protestant recruitment base. The Irish Volunteers, formed in November 1913, were in part inspired by and modelled on the Ulster Volunteers, but their founders, including Eoin MacNeill and Patrick Pearse, also drew heavily upon the legacy of the 18th-century Volunteers. The renowned Irish historian and writer James Camlin Beckett stated that when the Act of Union between Great Britain and Ireland was being debated in the Parliament of Ireland throughout 1800, the "national spirit of 1782 was dead". Despite this, Henry Grattan, who had helped secure the Irish parliament's legislative independence in 1782, bought Wicklow borough at midnight for £1,200, and after dressing in his old Volunteer uniform, arrived at the House of Commons of the Irish parliament at 7 a.m., after which he gave a two-hour speech against the proposed union. Denis McCullough and Bulmer Hobson of the Irish Republican Brotherhood (IRB) established the Dungannon Clubs in 1905 ... "to celebrate those icons of the constitutionalist movement, the Irish Volunteers of 1782". MacNeill stated of the original Volunteers, "the example of the former Volunteers (of 1782) is not that they did not fight but that they did not maintain their organisation till their objects had been secured". One of the mottos used by the Volunteers, Quis Separabit, meaning "who shall separate us", in use by them from at least 1781, is also used by the Order of St. Patrick (founded in 1783) and by several Irish British Army regiments such as the Royal Dragoon Guards, Royal Ulster Rifles (previously Royal Irish Rifles), 4th Royal Irish Dragoon Guards, 88th Regiment of Foot (Connaught Rangers) and its successor the Connaught Rangers. It was also adopted by the anti-Home Rule organisation the Ulster Defence Union, and is also the motto of the paramilitary Ulster Defence Force.
- Blackstock, Allan (2001). Issue 2 of Belfast Society publications, ed. Double traitors?: the Belfast Volunteers and Yeomen, 1778–1828. Ulster Historical Foundation. p. 2. ISBN 978-0-9539604-1-5. Retrieved 3/10/09.
- Garvin, Tom (1981). The Evolution of Irish Nationalist Politics. Gill and Macmillan Ltd. p. 20. ISBN 0-7171-1312-4.
- Curtis, Liz (1994). The Cause of Ireland: From the United Irishmen to Partition. Beyond the Pale Publications. p. 4. ISBN 0-9514229-6-0.
- Bardon, Jonathan; A History of Ulster, page 217-220.
The Black Staff Press, 2005. ISBN 0-85640-764-X
- Ulster Museum, History of Belfast exhibition
- Padraig O Snodaigh; THE IRISH VOLUNTEERS 1715–1793 – A list of the units, pg 88. Irish Academic Press, Dublin.
- Day, Robert; The Ulster Volunteers of '82: Their Medals, Badges, &c., Ulster Journal of Archaeology, Second Series, Vol. 4, No. 2 (Jan. 1898).
- Bigger, Francis Joseph; Ulster Volunteers in 1760, Ulster Journal of Archaeology, Second Series, Vol. 8, No. 4 (Oct. 1902).
- Google Books – Collectanea Politica
- Bigger, Francis Joseph; The National Volunteers of Ireland, 1782, Ulster Journal of Archaeology, Second Series, Vol. 15, No. 2/3 (May 1909).
- Paterson, T. G. F.; The County Armagh Volunteers of 1778–1793, Ulster Journal of Archaeology, Third Series, Vol. 4 (1941)
- Thomas Bartlett (2010). Ireland: a History. Cambridge University Press. p. 179. ISBN 978-1-107-42234-6.
- Stewart, A.T.Q. (1998). A Deeper Silence: The Hidden Origins of the United Irishmen. Blackstaff Press. pp. 4–5. ISBN 0-85640-642-2.
- Cruise O'Brien, Conor (1994). The great melody: a thematic biography and commented anthology of Edmund Burke. American Politics and Political Economy Series. University of Chicago Press. p. 179. ISBN 978-0-226-61651-3. Retrieved 3/10/09.
- Berresford Ellis, Peter (1985). A History of the Irish Working Class. Pluto. pp. 63–64. ISBN 0-7453-0009-X.
- F.X. Martin, T.W. Moody (1980). The Course of Irish History. Mercier Press. pp. 232–233. ISBN 1-85635-108-4.
- Ian McBride. History and Memory in Modern Ireland. Cambridge University Press. ISBN 0-521-79366-1.
- Jonah Barrington's Memoirs; chapter 7 on the Volunteers
- Duffy, Sean (2005). A Concise History of Ireland. pp. 132–133. ISBN 0-7171-3810-0.
- Paterson, T. G. F.; The County Armagh Volunteers of 1778–1793: List of Companies, Ulster Journal of Archaeology, Third Series, Vol. 6 (1943)
- Bardon, Jonathan; A History of Ulster, page 214-217. The Black Staff Press, 2005. ISBN 0-85640-764-X
- W. T. Latimer; Church of the Volunteers, Dungannon, Ulster Journal of Archaeology, Second Series, Vol. 1, No. 1 (Sep. 1894).
- F.X. Martin, T.W. Moody (1994). The Course of Irish History. Mercier Press. p. 233. ISBN 1-85635-108-4.
- Duffy, Sean (2005). A Concise History of Ireland. pp. 133–134. ISBN 0-7171-3810-0. Quote: We know our duty to our Sovereign, and are loyal. We know our duty to ourselves, and are resolved to be free. We seek for our rights and no more than our rights.
- British Museum, Pelham MSS., i, p. 308 (printed in Deputy Keeper's Report, N.I. Record Office (1936), p. 16)
- Biggar, Francis Joseph; The Ulster Volunteers of '82: Their Medals, Badges, &c. Gillball Volunteers, Ulster Journal of Archaeology, Second Series, Vol. 5, No. 1 (Oct. 1898).
- Maitland, W. H.; History of Magherafelt, page 13. Moyola Books, 1916, republished 1988. ISBN 0-9511836-2-1
- Bigger, Francis Joseph; The National Volunteers of Ireland, 1782, Ulster Journal of Archaeology, Second Series, Vol. 15, No. 2/3 (May 1909)
- Longman, Hurst, Rees, Orme, and Brown: Miscellaneous works of the Right Honourable Henry Grattan, 1822
- Queen's University Belfast. "Act of Union". Retrieved 14 November 2011.
- Bardon, Jonathan; A History of Ulster, page 223. The Black Staff Press, 2005.
ISBN 0-85640-764-X
- Google Books – The Four Nations: A History of the United Kingdom, by Frank Welsh
- Ulster Museum – Henry Joy McCracken's Volunteer Coat
- Google Books – The New monthly magazine
- Connolly, S.J., Oxford Companion to Irish History, page 611. Oxford University Press, 2007. ISBN 978-0-19-923483-7
- Thomas Camac, Robert Day and William Cathcart; The Ulster Volunteers of 1782: Their Medals, Badges, Flags, &c. (Continued), Ulster Journal of Archaeology, Second Series, Vol. 6, No. 1 (Jan. 1900).
- Bartlett, Thomas (2010). Ireland: A History. Cambridge University Press. p. 190. ISBN 978-0-521-19720-5.
- Timothy Bowman. Carson's Army, The Ulster Volunteer Force, 1910-22. Manchester University Press. pp. 16, 68. ISBN 978-0719073724.
- Jackson, Alvin; Home Rule - An Irish History 1800-2000, page 120. Weidenfeld & Nicolson, 2003. ISBN 1-84212-724-1. Quote: The UVF was a direct inspiration for the Irish Volunteers, formed in November 1913 by those on the nationalist side who feared that Home Rule had stalled.
- Kelly, M. J. (2006). The Fenian ideal and Irish nationalism, 1882–1916. Volume 4 of Irish historical monographs series. Boydell & Brewer Ltd. pp. 213–214. ISBN 978-1-84383-204-1. Retrieved 3 February 2014.
- "The Union". University College Cork. Retrieved 3 November 2011.
- Charles Townshend, Easter 1916, The Irish Rebellion (2006), p18
- Townshend, Charles (1983). Political violence in Ireland: government and resistance since 1848. Oxford Historical Monographs. Clarendon Press. p. 295. ISBN 978-0-19-821753-4. Retrieved 14/01/10.
- Day, Robert; On Three Gold Medals of the Irish Volunteers, The Journal of the Royal Society of Antiquaries of Ireland, Fifth Series, Vol. 10, No.4 (31 December 1900).
- Rosie Cowan, Ireland correspondent (28 September 2002). "The rise and fall of Johnny Adair | UK news". London: The Guardian. Retrieved 25 October 2010.
- Stewart, A.T.Q. (1998). A Deeper Silence: The Hidden Origins of the United Irishmen. Blackstaff. ISBN 0-85640-642-2.
- Jackson, T.A. (1946). Ireland Her Own. Cobbett Press.
- Curtis, Liz (1994). The Cause of Ireland: From the United Irishmen to Partition. Beyond the Pale Publications. ISBN 0-9514229-6-0.
- F.X. Martin, T.W. Moody (1994). The Course of Irish History. Mercier Press. ISBN 1-85635-108-4.
- Llwelyn, Morgan (2001). Irish Rebels. O'Brien Press. ISBN 0-86278-857-9.
- Connolly, S.J., Oxford Companion to Irish History, Oxford University Press, 2007. ISBN 978-0-19-923483-7
- Kelly, M. J. (2006). The Fenian ideal and Irish nationalism, 1882–1916. Boydell & Brewer Ltd. ISBN 978-1-84383-204-1.
- Townshend, Charles (1983). Political violence in Ireland: government and resistance since 1848. Clarendon Press. ISBN 978-0-19-821753-4.
https://en.wikipedia.org/wiki/Irish_Volunteers_(18th_century)
4.03125
National Research Council, The National Academies
Video length: 4:37 min.
High School: 7 Disciplinary Core Ideas
Notes From Our Reviewers
The CLEAN collection is hand-picked and rigorously reviewed for scientific accuracy and classroom effectiveness. Read what our review team had to say about this resource below, or learn more about how CLEAN reviews teaching materials.
Teaching Tips
- A good video to show at the beginning of a unit on climate change.
- May need to break the video into sections because the information presented is very dense.
- Students will need scaffolding.
- High level - recommended for advanced classes.
About the Science
- Comments from expert scientist: The video provides clear evidence of rising surface temperatures over the past century. It refers to the multitude of observational records - including in-situ and satellite measurements of temperature, snow and ice cover - to make the case that the Earth is warming.
About the Pedagogy
- No supporting teaching resources with this video. Can download transcript as pamphlet.
Technical Details/Ease of Use
- Can change captions to other languages.
- High quality video and resolution.
- The whole series is here: http://www.youtube.com/playlist?annotation_id=annotation_392971&feature=iv&list=PL38EB9C0BC54A9EE2&src_vid=-IuVzcp39rs
Next Generation Science Standards
See how this Video supports: Disciplinary Core Ideas: 7
HS-ESS2.A1: Earth’s systems, being dynamic and interacting, cause feedback effects that can increase or decrease the original changes.
HS-ESS2.A3: The geological record shows that changes to global and regional climate can be caused by interactions among changes in the sun’s energy output or Earth’s orbit, tectonic events, ocean circulation, volcanic activity, glaciers, vegetation, and human activities. These changes can occur on a variety of time scales from sudden (e.g., volcanic ash clouds) to intermediate (ice ages) to very long-term tectonic cycles.
HS-ESS2.C1: The abundance of liquid water on Earth’s surface and its unique combination of physical and chemical properties are central to the planet’s dynamics. These properties include water’s exceptional capacity to absorb, store, and release large amounts of energy, transmit sunlight, expand upon freezing, dissolve and transport materials, and lower the viscosities and melting points of rocks.
HS-ESS2.D1: The foundation for Earth’s global climate systems is the electromagnetic radiation from the sun, as well as its reflection, absorption, storage, and redistribution among the atmosphere, ocean, and land systems, and this energy’s re-radiation into space.
HS-ESS2.D2: Gradual atmospheric changes were due to plants and other organisms that captured carbon dioxide and released oxygen.
HS-ESS2.D3: Changes in the atmosphere due to human activity have increased carbon dioxide concentrations and thus affect climate.
HS-ESS2.E1: The many dynamic and delicate feedbacks between the biosphere and other Earth systems cause a continual co-evolution of Earth’s surface and the life that exists on it.
http://cleanet.org/resources/43784.html
4.1875
For decades, scientists and the public alike have wondered why some fireflies exhibit synchronous flashing, in which large groups produce rhythmic, repeated flashes in unison, sometimes lighting up a whole forest at once. Now, UConn's Andrew Moiseff, a professor in the Department of Physiology and Neurobiology in the College of Liberal Arts and Sciences, has conducted the first experiments on the purpose of this phenomenon. His results, reported in the journal Science, suggest that synchronous flashing encourages female fireflies' recognition of suitable mates. "There have been lots of really good observations and hypotheses about firefly synchrony," Moiseff says. "But until now, no one has experimentally tested whether synchrony has a function." Moiseff has had an interest in fireflies since he was an undergraduate at Stony Brook University. There he met his current collaborator, Jonathan Copeland of Georgia Southern University, who was a graduate student at the time. When the two graduated, Moiseff moved on to pursue other research interests. But in 1992, Copeland received an enlightening phone call. He had commented in a paper that firefly synchrony was rare, and mostly seen in southeast Asia, says Moiseff. But a naturalist from Tennessee called him to say that each summer the fireflies at her summer cabin all flashed at the same time. Moiseff and Copeland flew down to the Great Smoky Mountains National Park to check out the fireflies and, says Moiseff, they've been going back every year since. Fireflies, which are actually a type of beetle, produce bioluminescence as a mating tool, in which males display a species-specific pattern of flashes while cruising through the air, looking for females, says Moiseff. These patterns consist of one or more flashes followed by a characteristic pause, during which female fireflies, perched on leaves or branches, will produce a single response flash if they spot a suitable male. Of the roughly 2,000 species of fireflies around the world, scientists estimate that about 1 percent synchronize their flashes over large areas. Thousands of male fireflies may blink at once, creating a spectacular light show. In their current study, Moiseff and Copeland wondered what evolutionary benefit this species gains from synchronous flashing. The two hypothesized that males synchronize to facilitate the females' ability to recognize the particular flashing pattern of their own species. To test this theory, they collected females of the synchronous species Photinus carolinus from the Smoky Mountains National Park and exposed them in the laboratory to groups of small blinking lights meant to mimic male fireflies. Each individual light produced the P. carolinus flashing pattern, but the experimenters varied the degree to which the flashes were in synch with one another. "We had the technology to design something that we thought would create a virtual world for these females," says Moiseff. Their results showed that females responded more than 80 percent of the time to flashes that were in perfect unison or in near-perfect unison. But when the flashes were out of synch, the females' response rate was 10 percent or less. Since synchronous species are often observed in high densities, Moiseff and Copeland concluded that their results suggest a physiological problem in the females' information processing. Male fireflies are typically in flight while searching for females, so their flashes appear in different locations over time.
Therefore, says Moiseff, females must be able to recognize visual cues over a large area. But, he points out, this behavior presents a problem in areas crowded with male fireflies. Instead of seeing a single flying male, the female would see a cluttered landscape of flashes that could be individually unrecognizable. "When males are flashing in high densities, the female's inability to focus on just one male would make it very difficult for her to detect her species-specific pattern," Moiseff says. "So if the males synchronize, it can maintain the fidelity of the signal in the presence of many other males." Whether the females can't or simply choose not to discriminate spatial information on small scales is unclear, says Moiseff. His future research will focus on questions that address whether physiological constraints or behavioral decisions are driving the evolution of synchrony. Overall, says Moiseff, he is interested in the role that animal physiology plays in shaping evolution. "Animals have evolved to solve unique problems in many different ways, and I'm interested in how they do that," he says. "Fireflies have these tiny heads and these tiny brains, but they can do some complex and amazing things."
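The "signal fidelity" idea described above lends itself to a small thought experiment in code. The Python sketch below is a toy model, not the researchers' protocol: the flash pattern (a short burst followed by a long pause) and every number in it are invented for illustration. It simply asks whether the species-identifying pause is still visible when the flashes of many males are pooled into one perceived stream, first with the males synchronized and then with their cycles scattered in time.

```python
import random

# Invented pattern parameters, loosely in the spirit of the description above:
# a burst of flashes, then a long pause that identifies the species.
BURST_FLASHES = 6       # flashes per burst
FLASH_INTERVAL = 0.5    # seconds between flashes within a burst
PAUSE = 8.0             # seconds of darkness between bursts
CYCLES = 5              # how many bursts each male produces

def male_flash_times(offset):
    """All flash times for one male whose cycle starts at `offset` seconds."""
    period = BURST_FLASHES * FLASH_INTERVAL + PAUSE
    return [offset + c * period + i * FLASH_INTERVAL
            for c in range(CYCLES) for i in range(BURST_FLASHES)]

def pause_visible(flash_times, needed_gap=0.8 * PAUSE):
    """Can a female who pools every flash she sees still find the long pause?"""
    t = sorted(flash_times)
    gaps = [b - a for a, b in zip(t, t[1:])]
    return bool(gaps) and max(gaps) >= needed_gap

def trial(n_males, max_offset):
    """max_offset = 0 means perfect synchrony; larger values mean more scatter."""
    flashes = []
    for _ in range(n_males):
        flashes += male_flash_times(random.uniform(0.0, max_offset))
    return pause_visible(flashes)

random.seed(1)
for max_offset in (0.0, 0.5, 6.0):
    hits = sum(trial(n_males=20, max_offset=max_offset) for _ in range(100))
    print(f"cycle scatter up to {max_offset} s: pattern recognisable in {hits}/100 trials")
```

With no scatter the pooled stream keeps its long pauses; once the males' cycles are spread over several seconds the pauses fill in and the pattern is lost, which loosely mirrors the drop from roughly 80 percent to 10 percent responses reported above.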
http://phys.org/news/2011-07-fireflies-synch.html
4.34375
Although the Jews were their primary targets, the Nazis and their collaborators also persecuted other groups for racial or ideological reasons. Among the earliest victims of Nazi discrimination in Germany were political opponents—primarily Communists, Socialists, Social Democrats, and trade union leaders. The Nazis also persecuted authors and artists whose works they considered subversive or who were Jewish, subjecting them to arrest, economic restrictions, and other forms of discrimination. The Nazis targeted Roma (Gypsies) on racial grounds. Roma were among the first to be killed in mobile gas vans at the Chelmno killing center in Poland. The Nazis also deported more than 20,000 Roma to the Auschwitz-Birkenau camp, where most of them were murdered in the gas chambers. The Nazis viewed Poles and other Slavic peoples as inferior. Poles who were considered ideologically dangerous (including intellectuals and Catholic priests) were targeted for execution. Between 1939 and 1945, at least 1.5 million Polish citizens were deported to German territory for forced labor. Hundreds of thousands were also imprisoned in Nazi concentration camps. It is estimated that the Germans killed at least 1.9 million non-Jewish Polish civilians during World War II. During the autumn and winter of 1941-1942 in the occupied Soviet Union, German authorities conducted a racist policy of mass murder of Soviet prisoners of war: Jews, persons with "Asiatic features," and top political and military leaders were selected out and shot. Around three million others were held in makeshift camps without proper shelter, food, or medicine with the deliberate intent that they die. In Germany, the Nazis incarcerated Christian church leaders who opposed Nazism, as well as thousands of Jehovah's Witnesses who refused to salute Adolf Hitler or to serve in the German army. Through the so-called “Euthanasia Program,” the Nazis murdered an estimated 200,000 individuals with mental or physical disabilities. The Nazis also persecuted male homosexuals, whose behavior they considered a hindrance to the preservation of the German nation.
http://www.ushmm.org/wlc/en/article.php?ModuleId=10007871
4.5
Algebra: In Simplest Terms In this series, host Sol Garfunkel explains how algebra is used for solving real-world problems and clearly explains concepts that may baffle many students. Graphic illustrations and on-location examples help students connect mathematics to daily life. The series also has applications in geometry and calculus instruction. 1. Introduction—An introduction to the series, this program presents several mathematical themes and emphasizes why algebra is important in today’s world. 2. The Language of Algebra—This program provides a survey of basic mathematical terminology. Content includes properties of the real number system and the basic axioms and theorems of algebra. Specific terms covered include algebraic expression, variable, product, sum term, factors, common factors, like terms, simplify, equation, sets of numbers, and axioms. 3. Exponents and Radicals—This program explains the properties of exponents and radicals: their definitions, their rules, and their applications to positive numbers. 4. Factoring Polynomials—This program defines polynomials and describes how the distributive property is used to multiply common monomial factors with the FOIL method. It covers factoring, the difference of two squares, trinomials as products of two binomials, the sum and difference of two cubes, and regrouping of terms. 5. Linear Equations—This is the first program in which equations are solved. It shows how solutions are obtained, what they mean, and how to check them using one unknown. 6. Complex Numbers—To the sets of numbers reviewed in previous lessons, this program adds complex numbers — their definition and their use in basic operations and quadratic equations. 7. Quadratic Equations—This program reviews the quadratic equation and covers standard form, factoring, checking the solution, the Zero Product Property, and the difference of two squares. 8. Inequalities—This program teaches students the properties and solution of inequalities, linking positive and negative numbers to the direction of the inequality. 9. Absolute Value—In this program, the concept of absolute value is defined, enabling students to use it in equations and inequalities. One application example involves systolic blood pressure, using a formula incorporating absolute value to find a person’s “pressure difference from normal.” 10. Linear Relations—This program looks at the linear relationship between two variables, expressed as a set of ordered pairs. Students are shown the use of linear equations to develop and provide information about two quantities, as well as the applications of these equations to the slope of a line. 11. Circle and Parabola—The circle and parabola are presented as two of the four conic sections explored in this series. The circle, its various measures when graphed on the coordinate plane (distance, radius, etc.), its related equations (e.g., center-radius form), and its relationships with other shapes are covered, as is the parabola with its various measures and characteristics (focus, directrix, vertex, etc.). 12. Ellipse and Hyperbola—The ellipse and hyperbola, the other two conic sections examined in the series, are introduced. The program defines the two terms, distinguishing between them with different language, equations, and graphic representations. 13. Functions—This program defines a function, discusses domain and range, and develops an equation from real situations. 
The cutting of pizza and encoding of secret messages provide subjects for the demonstration of functions and their usefulness. 14. Composition and Inverse Functions—Graphics are used to introduce composites and inverses of functions as applied to calculation of the Gross National Product. 15. Variation—In this program, students are given examples of special functions in the form of direct variation and inverse variation, with a discussion of combined variation and the constant of proportionality. 16. Polynomial Functions—This program explains how to identify, graph, and determine all intercepts of a polynomial function. It covers the role of coefficients; real numbers; exponents; and linear, quadratic, and cubic functions. This program touches upon factors, x-intercepts, and zero values. 17. Rational Functions—A rational function is the quotient of two polynomial functions. The properties of these functions are investigated using cases in which each rational function is expressed in its simplified form. 18. Exponential Functions—Students are taught the exponential function, as illustrated through formulas. The population of Massachusetts, the “learning curve,” bacterial growth, and radioactive decay demonstrate these functions and the concepts of exponential growth and decay. 19. Logarithmic Functions—This program covers the logarithmic relationship, the use of logarithmic properties, and the handling of a scientific calculator. How radioactive dating and the Richter scale depend on the properties of logarithms is explained. 20. Systems of Equations—The case of two linear equations in two unknowns is considered throughout this program. Elimination and substitution methods are used to find single solutions to systems of linear and nonlinear equations. 21. Systems of Linear Inequalities—Elimination and substitution are used again to solve systems of linear inequalities. Linear programming is shown to solve problems in the Berlin airlift, production of butter and ice cream, school redistricting, and other situations while constraints, corner points, objective functions, the region of feasible solutions, and minimum and maximum values are also explored. 22. Arithmetic Sequences and Series—When the growth of a child is regular, it can be described by an arithmetic sequence. This program differentiates between arithmetic and nonarithmetic sequences as it presents the solutions to sequence- and series-related problems. 23. Geometric Sequences and Series—This program provides examples of geometric sequences and series (f-stops on a camera and the bouncing of a ball), explaining the meaning of nonzero constant real number and common ratio. 24. Mathematical Induction—Mathematical proofs applied to hypothetical statements shape this discussion on mathematical induction. This segment exhibits special cases, looks at the development of number patterns, relates the patterns to Pascal’s triangle and factorials, and elaborates the general form of the theorem. 25. Permutations and Combinations—How many variations in a license plate number or poker hand are possible? This program answers the question and shows students how it’s done. 26. Probability—In this final program, students see how the various techniques of algebra that they have learned can be applied to the study of probability. The program shows that games of chance, health statistics, and product safety are areas in which decisions must be made according to our understanding of the odds. A short worked sketch of two of these topics, the quadratic formula (program 7) and geometric series (program 23), follows below.
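The catalogue above stays at the level of description, so here is a minimal, illustrative Python sketch of two of the listed topics: the quadratic formula with a substitution check (program 7) and the partial and limiting sums of a geometric series such as a bouncing ball's rebounds (program 23). The specific numbers are arbitrary examples, not taken from the programs themselves.

```python
import math

# Program 7: solve ax^2 + bx + c = 0 with the quadratic formula,
# then check each root by substituting it back into the equation.
def solve_quadratic(a, b, c):
    disc = b * b - 4 * a * c
    if disc < 0:
        return []                      # complex roots belong to program 6
    root = math.sqrt(disc)
    return [(-b + root) / (2 * a), (-b - root) / (2 * a)]

for x in solve_quadratic(1, -5, 6):    # x^2 - 5x + 6 = (x - 2)(x - 3)
    print(f"x = {x}, check: {x**2 - 5*x + 6}")   # both checks give 0.0

# Program 23: partial sums of a geometric series with first term a and
# common ratio r, e.g. successive rebounds of a ball that keeps 60 percent.
def geometric_sum(a, r, n):
    return a * (1 - r**n) / (1 - r)

print(geometric_sum(a=2.0, r=0.6, n=10))   # sum of the first 10 terms
print(2.0 / (1 - 0.6))                     # limit of the infinite series: 5.0
```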
http://www.prairiepublic.org/education/instructional-resources?post=13951
4.15625
Apollo 13 was the seventh mission of NASA's Project Apollo and the third manned lunar-lander mission. The flight was commanded by Jim Lovell; the other astronauts on board were Jack Swigert and Fred Haise. The craft was launched successfully toward the Moon, but two days after launch a faulty oxygen tank exploded, damaging the Service Module and causing a loss of oxygen and electrical power. There was a very large chance that the astronauts would die before they could return to Earth. They were very short of oxygen, which was not used just for breathing: on the Apollo spacecraft it also fed devices called fuel cells that generated electricity. So the crew conserved their remaining oxygen by turning off almost all their electrical equipment, including heaters, and it became very cold in the spacecraft. To stay alive the astronauts also had to move into the Apollo Lunar Module and make it work as a sort of "lifeboat". As they approached the Earth they were not sure that the parachutes needed to slow the Command Module down would work. The parachutes were deployed by small explosive charges fired by batteries, and the cold could have made the batteries fail, in which case the parachutes would not open and the Command Module would hit the ocean so fast that all aboard would be killed. The flight Apollo 13 was launched into Earth orbit on 11 April 1970 at 19:13 UTC. The crew flew from Cape Canaveral and planned to land at Fra Mauro. Despite the hardships, the crew made it back to Earth. Even though the crew did not land on the Moon, the flight became very well known. Some people regarded it as a failure because of the missed landing, but others thought it was possibly the National Aeronautics and Space Administration's (NASA's) greatest accomplishment: returning three men safely to Earth in a very damaged spacecraft. Coming up to re-entry, it was feared that the electrical equipment would short-circuit because the water in the astronauts' breath had condensed into liquid all over the computers. However, the electronics were fine.
https://simple.wikipedia.org/wiki/Apollo_13
4.15625
A chance discovery of 80-year-old photo plates in a Danish basement is providing new insight into how Greenland glaciers are melting today. Researchers at the National Survey and Cadastre of Denmark -- that country's federal agency responsible for surveys and mapping -- had been storing the glass plates since explorer Knud Rasmussen's expedition to the southeast coast of Greenland in the early 1930s. In this week's online edition of Nature Geoscience, Ohio State University researchers and colleagues in Denmark describe how they analyzed ice loss in the region by comparing the images on the plates to aerial photographs and satellite images taken from World War II to today. Taken together, the imagery shows that glaciers in the region were melting even faster in the 1930s than they are today, said Jason Box, associate professor of geography and researcher at the Byrd Polar Research Center at Ohio State. A brief cooling period starting in the mid-20th century allowed new ice to form, and then the melting began to accelerate again in the 2000s. "Because of this study, we now have a detailed historical analogue for more recent glacier loss," Box said. "And we've confirmed that glaciers are very sensitive indicators of climate." Pre-satellite observations of Greenland glaciers are rare. Anders Anker Bjørk, doctoral fellow at the Natural History Museum of Denmark and lead author of the study, is trying to compile all such imagery. He found a clue in the archives of The Arctic Institute in Copenhagen in 2011. "We found flight journals for some old planes, and in them was a reference to National Survey and Cadastre of Denmark," Bjørk said. As it happens, researchers at the National Survey had already contacted Bjørk about a find of their own. "They were cleaning up in the basement and had found some old glass plates with glaciers on them. The reason the plates were forgotten was that they were recorded for mapping, and once the map was produced they didn't have much value." Those plates turned out to be documentation of Rasmussen's 7th Thule Expedition to Greenland. They contained aerial photographs of land, sea and glaciers in the southeast region of the country, along with travel photos of Rasmussen's team. The researchers digitized all the old images and used software to look for differences in the shape of the southeast Greenland coastline where the ice meets the Atlantic Ocean. Then they calculated the distance the ice front moved in each time period. Over the 80 years, two events stand out: glacial retreats from 1933-1934 and 2000-2010. In the 1930s, fewer glaciers were melting than are today, and most of those that were melting were land-terminating glaciers, meaning that they did not contact the sea. Those that were melting retreated an average of 20 meters per year -- the fastest retreating at 374 meters per year. Fifty-five percent of the glaciers in the study had similar or higher retreat rates during the 1930s than they do today. Still, more glaciers in southeast Greenland are retreating today, and the average ice loss is 50 meters per year. That's because a few glaciers with very fast melting rates -- including one retreating at 887 meters per year -- boost the overall average. But to Box, the most interesting part of the study is what happened between the two melting events. From 1943-1972, southeast Greenland cooled -- probably due to sulfur pollution, which reflects sunlight away from Earth. Sulfur dioxide is a poisonous gas produced by volcanoes and industrial processes. 
It has been tied to serious health problems and death, and is also the main ingredient in acid rain. Its presence in the atmosphere peaked just after the Clean Air Act was established in 1963. As it was removed from the atmosphere, the earlier warming resumed. The important point is not that deadly pollution caused the climate to cool, but rather that the brief cooling allowed researchers to see how Greenland ice responded to the changing climate. The glaciers responded to the cooling more rapidly than researchers had seen in earlier studies. Sixty percent of the glaciers advanced during that time, while 12 percent were stationary. And now that the warming has resumed, the glacial retreat is dominated by marine-terminating outlet glaciers, the melting of which contributes to sea level rise. "From these images, we see that the mid-century cooling stabilized the glaciers," Box said. "That suggests that if we want to stabilize today's accelerating ice loss, we need to see a little cooling of our own." Southeast Greenland is a good place to study the effects of climate change, he explained, because the region is closely tied to air and water circulation patterns in the North Atlantic. "By far, more storms pass through this region -- transporting heat into the Arctic -- than anywhere else in the Northern Hemisphere. Climate change brings changes in snowfall and air temperature that compete for influence on a glacier's net behavior," he said. Co-authors on the study include Kurt H. Kjær, Niels J. Korsgaard, Kristian K. Kjeldsen, and Svend Funder at the Natural History Museum of Denmark, University of Copenhagen; Shfaqat A. Khan of the National Space Institute, Technical University of Denmark; Camilla S. Andresen of the Department of Marine Geology and Glaciology at the Geological Survey of Denmark and Greenland; and Nicolaj K. Larsen of the Department of Geoscience at Aarhus University. Photos, satellite images and other data for the study were provided by the National Survey and Cadastre; The Scott Polar Research Institute in the United Kingdom; the Arctic Institute in Denmark; researchers Bea Csatho and Sudhagar Nagarajan of the Geology Department at the University at Buffalo; and the NASA Land Processes Distributed Active Archive Center at the USGS/Earth Resources Observation and Science Center of Sioux Falls, S.D. Andreas Pedersen of the Danish company MapWork wrote the script for the software used in the study. This work is a part of the RinkProject funded by the Danish Research Council and the Commission for Scientific Research in Greenland. Cite This Page:
http://www.sciencedaily.com/releases/2012/05/120529144339.htm
4.25
The lesson begins by associating the distance between two points with the right triangle formed by joining the points and extending horizontal and vertical lines through them. This construction is generalized to derive the distance formula for any two points in the plane. The midpoint formula is then derived by taking the average of the coordinates of the two points. Using the distance formula, the equation of a circle is derived, and examples follow for finding the equation of a given circle.
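The three results the lesson works toward can be stated compactly. For points (x1, y1) and (x2, y2) and a circle of radius r centered at (h, k), the standard formulas (given here for reference rather than quoted from the lesson text) are:

d = \sqrt{(x_2 - x_1)^2 + (y_2 - y_1)^2}

M = \left( \frac{x_1 + x_2}{2}, \; \frac{y_1 + y_2}{2} \right)

(x - h)^2 + (y - k)^2 = r^2

The circle equation follows directly from the distance formula: a point (x, y) lies on the circle exactly when its distance from the center (h, k) equals r.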
http://www.curriki.org/oer/Lesson-26-The-Distance-and-Midpoint-Formulas/
4.03125
Middle Paleolithic Hominids (Introduction to Paleoanthropology) - 1 The second phase of human migration - 2 Neanderthals - 3 Homo sapiens - 4 Out-of-Africa 2: The debate - 4.1 The "Multi-regional" model - 4.2 The "Out-of-Africa"/"Replacement" model - 4.3 Hypothesis testing - 4.4 Out-of-Africa 2: The evidence - 4.5 Fossil record - 4.6 Molecular biology - 4.7 Expectations - 4.8 Intermediate Model - 5 Case studies - 6 Population dispersal into Australia/Oceania - 7 Summary The second phase of human migration The time period between 250,000 and 50,000 years ago is commonly called the Middle Paleolithic. At the same time that Neanderthals occupied Europe and Western Asia, other kinds of people lived in the Far East and Africa, and those in Africa were significantly more modern than the Neanderthals. These Africans are thus more plausible ancestors for living humans, and it appears increasingly likely that Neanderthals were an evolutionary dead end, contributing few if any genes to historic populations. Topics to be covered in this chapter: - Summary of the fossil evidence for both the Neanderthals and some of their contemporaries; - Second phase of human migration ("Out-of-Africa 2" Debate) History of Research In 1856, a strange skeleton was discovered in Feldhofer Cave in the Neander Valley ("thal" = valley) near Dusseldorf, Germany. The skull cap was as large as that of a present-day human but very different in shape. Initially this skeleton was interpreted as that of a congenital idiot. The Forbes Quarry (Gibraltar) female cranium (now also considered Neanderthal) was discovered in 1848, eight years before the Feldhofer find, but its distinctive features were not recognized at that time. Subsequently, numerous Neanderthal remains were found in Belgium, Croatia, France, Spain, Italy, Israel and Central Asia. Anthropologists have been debating for 150 years whether Neanderthals were a distinct species or an ancestor of Homo sapiens sapiens. In 1997, DNA analysis from the Feldhofer Cave specimen showed decisively that Neanderthals were a distinct lineage. These data imply that Neanderthals and Homo sapiens sapiens were separate lineages with a common ancestor, Homo heidelbergensis, about 600,000 years ago. Unlike earlier hominids (with some rare exceptions), Neanderthals are represented by many complete or nearly complete skeletons. Neanderthals provide the best hominid fossil record of the Plio-Pleistocene, with about 500 individuals. About half the skeletons were children. Typical cranial and dental features are present in the young individuals, indicating that Neanderthal features were inherited, not acquired. Morphologically the Neanderthals are a remarkably coherent group, and they are therefore easier to characterize than most earlier human types. The Neanderthal skull has a low forehead, prominent brow ridges and occipital bones. It is long and low, but relatively thin walled. The back of the skull has a characteristic rounded bulge, and does not come to a point at the back. Cranial capacity is relatively large, ranging from 1,245 to 1,740 cc and averaging about 1,520 cc; it overlaps or even exceeds the average for Homo sapiens sapiens. The robust face with a broad nasal region projects out from the braincase. By contrast, the face of modern Homo sapiens sapiens is tucked under the brain box, the forehead is high, the occipital region rounded, and the chin prominent. Neanderthals have small back teeth (molars), but their incisors are relatively large and show very heavy wear.
Neanderthals' short legs and arms are characteristic of a body type that conserves heat. They were strong, rugged and built for cold weather. Large elbow, hip and knee joints and robust bones suggest great muscularity. The pelvis had a longer and thinner pubic bone than in modern humans. All adult skeletons exhibit some kind of disease or injury. Healed fractures and severe arthritis show that they had a hard life, and individuals rarely lived past 40 years old. Neanderthals lived from about 250,000 to 30,000 years ago in Eurasia. The earlier ones, like those at Atapuerca (Sima de los Huesos), were more generalized. The later ones are the more specialized, "classic" Neanderthals. The last Neanderthals lived in Southwest France, Portugal, Spain, Croatia, and the Caucasus as recently as 27,000 years ago. The distribution of Neanderthals extended from Uzbekistan in the east to the Iberian peninsula in the west, and from the margins of the Ice Age glaciers in the north to the shores of the Mediterranean Sea in the south. South-West France (the Dordogne region) is among the richest areas in Neanderthal cave shelters: - La Chapelle-aux-Saints; - La Ferrassie; - Saint-Césaire (which is one of the younger sites, at 36,000 years). Other sites include: - Krapina in Croatia; - Saccopastore in Italy; - Shanidar in Iraq; - Teshik-Tash (Uzbekistan). The 9-year-old hominid from this last site lies at the most easterly known part of their range. No Neanderthal remains have been discovered in Africa or East Asia. Chronology and Geography The time and place of Homo sapiens origin has preoccupied anthropologists for more than a century. For the longest time, many assumed their origin was in South-West Asia. But in 1987, anthropologist Rebecca Cann and colleagues compared DNA of Africans, Asians, Caucasians, Australians, and New Guineans. Their findings were striking in two respects: - the variability observed within each population was greatest by far in Africans, which implied the African population was oldest and thus ancestral to the Asians and Caucasians; - there was very little variability between populations, which indicated that our species originated quite recently. The human within-species variability was only 1/25th as much as the average difference between human and chimpanzee DNA. The human and chimpanzee lineages diverged about 5 million years ago, and 1/25th of 5 million is 200,000 (a short version of this calculation is sketched below). Cann therefore concluded that Homo sapiens originated in Africa about 200,000 years ago. Much additional molecular data and hominid remains further support a recent African origin of Homo sapiens, now estimated to be around 160,000-150,000 years ago. The Dmanisi evidence suggests early Europeans developed in Asia and migrated to Europe, creating modern Europeans with minor interaction with African Homo types. The Dmanisi work was published in the July 5, 2002 issue of the journal Science and is the subject of the cover story of the August issue of National Geographic magazine. New Asian finds are significant, they say, especially the 1.75 million-year-old small-brained early-human fossils found in Dmanisi, Georgia, and the 18,000-year-old "hobbit" fossils (Homo floresiensis) discovered on the island of Flores in Indonesia. Such finds suggest that Asia's earliest human ancestors may be older by hundreds of thousands of years than previously believed, the scientists say. Robin Dennell, of the University of Sheffield in England, and Wil Roebroeks, of Leiden University in the Netherlands, describe their ideas in the December 22, 2005 issue of Nature.
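Cann's estimate quoted above is a simple proportional (molecular-clock) argument. Using the round figures given in the text rather than the original study's exact values, it can be written as:

t_{\text{origin}} \approx \frac{\text{within-human mtDNA variation}}{\text{human-chimpanzee mtDNA difference}} \times t_{\text{human-chimp split}} = \frac{1}{25} \times 5{,}000{,}000 \text{ years} \approx 200{,}000 \text{ years}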
The fossil and archaeological finds characteristic of early modern humans are represented at various sites in East and South Africa, which date to between 160,000 and 77,000 years ago. Herto (Middle Awash, Ethiopia) In June 2003, hominid remains of a new subspecies, Homo sapiens idaltu, were published. Three skulls (two adults, one juvenile) are interpreted as the earliest near-modern humans: 160,000-154,000 BP. They exhibit some modern traits (very large cranium; high, round skull; flat face without browridge), but also retain archaic features (heavy browridge; widely spaced eyes). Their anatomy and antiquity link earlier archaic African forms to later fully modern ones, providing strong evidence that East Africa was the birthplace of Homo sapiens. Omo Kibish (Ethiopia) In 1967, Richard Leakey and his team uncovered a partial hominid skeleton (Omo I), which had the features of Homo sapiens. Another partial fragment of a skull (Omo II) revealed a cranial capacity over 1,400 cc. Dating of shells from the same level gave a date of 130,000 years. Ngaloba, Laetoli area (Tanzania) A nearly complete skull (LH 18) was found in the Upper Ngaloba Beds. Its morphology is largely modern, yet it retains some archaic features such as prominent brow ridges and a receding forehead. It is dated at about 120,000 years ago. Border Cave (South Africa) Remains of four individuals (a partial cranium, 2 lower jaws, and a tiny buried infant) were found in a layer dated to at least 90,000 years ago. Although fragmentary, these fossils appeared modern. Klasies River (South Africa) The site was occupied from 120,000 to 60,000 years ago. Most human fossils come from a layer dated to around 90,000 years ago. They are fragmentary: cranial, mandibular, and postcranial pieces. They appear modern, especially a fragmentary frontal bone that lacks a brow ridge. Chin and tooth size also have a modern aspect. Blombos Cave (South Africa) A layer dated to about 77,000 years ago yielded 9 human teeth or dental fragments, representing five to seven individuals, of modern appearance. These African skulls have reduced browridges and small faces. They tend to be higher and more rounded than classic Neanderthal skulls, and some approach or equal modern skulls in basic vault shape. Where cranial capacity can be estimated, the African skulls range between 1,370 and 1,510 cc, comfortably within the range of both the Neanderthals and anatomically modern people. Mandibles tend to have significantly shorter and flatter faces than did the Neanderthals. Postcranial parts indicate people who were robust, particularly in their legs, but who were fully modern in form. Out-of-Africa 2: The debate Most anthropologists agree that a dramatic shift in hominid morphology occurred during the last glacial epoch. About 150,000 years ago the world was inhabited by a morphologically heterogeneous collection of hominids: Neanderthals in Europe; less robust archaic Homo sapiens in East Asia; and somewhat more modern humans in East Africa (Ethiopia) and also SW Asia. By 30,000 years ago, much of this diversity had disappeared, and anatomically modern humans occupied all of the Old World. In order to understand how this transition occurred, we need to answer two questions: - Did the genes that give rise to modern human morphology arise in one region, or in many different parts of the globe? - Did the genes spread from one part of the world to another by gene flow, or through the movement and replacement of one group of people by another?
Unfortunately, genes don't fossilize, and we cannot study the genetic composition of ancient hominid populations directly. However, there is a considerable amount of evidence that we can bring to bear on these questions through the anatomical study of the fossil record and the molecular biology of living populations. The shapes of teeth from a number of hominid species suggest that arrivals from Asia played a greater role in colonizing Europe than hominids coming directly from Africa, according to a new analysis of more than 5,000 ancient teeth (Proceedings of the National Academy of Sciences, Aug 2007). Two opposing hypotheses for the transition to modern humans have been promulgated over the last decades: - the "multi-regional model" sees the process as the result of widespread phyletic transformation; - the "out-of-Africa model" sees the process as a localized speciation event followed by dispersal. The "Multi-regional" model This model proposes that ancestral Homo erectus populations throughout the world gradually and independently evolved first through archaic Homo sapiens, then to fully modern humans. In this case, the Neanderthals are seen as European versions of archaic sapiens. Recent advocates of the model have emphasized the importance of gene flow among different geographic populations, making their move toward modernity not independent but tied together as a genetic network over large geographical regions and over long periods of time. Since these populations were separated by great distances and experienced different kinds of environmental conditions, there was considerable regional variation in morphology among them. One consequence of this widespread phyletic transformation would be that modern geographic populations would have very deep genetic roots, having begun to separate from each other a very long time ago, perhaps as much as a million years. This model essentially sees multiple origins of Homo sapiens, and no necessary migrations. The "Out-of-Africa"/"Replacement" model This second hypothesis considers a geographically discrete origin, followed by migration throughout the rest of the Old World. By contrast with the first hypothesis, here we have a single origin and extensive migration. Modern geographic populations would have shallow genetic roots, having derived from a speciation event in relatively recent times. Hominid populations were genetically isolated from each other during the Middle Pleistocene. As a result, different populations of Homo erectus and archaic Homo sapiens evolved independently, perhaps forming several hominid species. Then, between 200,000 and 100,000 years ago, anatomically modern humans arose someplace in Africa and spread out, replacing other archaic sapiens including the Neanderthals. The replacement model does not specify how anatomically modern humans usurped local populations. However, the model posits that there was little or no gene flow between hominid groups. If the "Multi-regional Model" were correct, then it should be possible to see in modern populations echoes of anatomical features that stretch way back into prehistory: this is known as regional continuity. In addition, the appearance in the fossil record of advanced humans might be expected to occur more or less simultaneously throughout the Old World. By contrast, the "Out-of-Africa Model" predicts little regional continuity and the appearance of modern humans in one locality before they spread into others.
Out-of-Africa 2: The evidence Until relatively recently, there was a strong sentiment among anthropologists in favor of extensive regional continuity. In addition, Western Europe tended to dominate the discussions. Evidence has expanded considerably in recent years, and now includes molecular biology data as well as fossils. Now there is a distinct shift in favor of some version of the "Out-of-Africa Model". Discussion based on detailed examination of the fossil record and mitochondrial DNA needs to address criteria for identifying: - regional continuity; - the earliest geographical evidence (center of origin); - the chronology of appearance of modern humans. The fossil evidence most immediately relevant to the origin of modern humans is to be found throughout Europe, Asia, Australasia, and Africa, and goes back in time as far as 300,000 years ago. Most fossils are crania of varying degrees of incompleteness. They look like a mosaic of Homo erectus and Homo sapiens, and are generally termed archaic sapiens. It is among such fossils that signs of regional continuity are sought, being traced through to modern populations. For example, some scholars (Alan Thorne) argue for such regional anatomical continuities among Australasian populations and among Chinese populations. In the same way, some others believe a good case can be made for regional continuity in Central Europe and perhaps North Africa. By contrast, proponents of a replacement model argue that, for most of the fossil record, the anatomical characters being cited as indicating regional continuity are primitive, and therefore cannot be used uniquely to link specific geographic populations through time. The equatorial anatomy of the first modern humans in Europe presumably is a clue to their origin: Africa. There are sites from the north, east and south of the African continent with specimens of anatomical modernity. One of the most accepted is Klasies River in South Africa. The recent discovery of remains of H. sapiens idaltu at Herto (Ethiopia) confirms this evidence. Does this mean that modern Homo sapiens arose as a speciation event in Eastern Africa (Ethiopia), with populations migrating north, eventually to enter Eurasia? This is a clear possibility. The earlier appearance of anatomically modern humans in Africa than in Europe and Asia also supports the "Out-of-Africa Model". Just as molecular evidence had played a major role in understanding the beginnings of the hominid family, so too could it be applied to the later history, in principle. However, because that later history inevitably covers a shorter period of time - no more than the past 1 million years - conventional genetic data would be less useful than they had been for pinpointing the time of divergence between hominids and apes, at least 5 million years ago. Genes in cell nuclei accumulate mutations rather slowly. Therefore trying to infer the recent history of populations based on such mutations is difficult, because of the relative paucity of information. DNA that accumulates mutations at a much higher rate would, however, provide adequate information for reading recent population history. That is precisely what mitochondrial DNA (mtDNA) offers. MtDNA analysis is a relatively new technique for reconstructing family trees. Unlike the DNA in the cell nucleus, mtDNA is located elsewhere in the cell, in compartments that produce the energy needed to keep cells alive.
Unlike an individual's nuclear genes, which are a combination of genes from both parents, the mitochondrial genome comes only from the mother. Because of this maternal mode of inheritance, there is no recombination of maternal and paternal genes, which sometimes blurs the history of the genome as read by geneticists. Potentially, therefore, mtDNA offers a powerful way of inferring population history. MtDNA can yield two major conclusions relevant for our topic: the first addresses the depth of our genetic roots, the second the possible location of the origin of anatomically modern humans. If the multi-regional model were correct, mtDNA would be expected to show: - extensive genetic variation, implying an ancient origin, going back at least a million years (certainly around 1.8 million years ago); - no population with significantly more variation than any other. Any extra variation the African population might have had as the home of Homo erectus would have been swamped by the subsequent million years of further mutation. If the out-of-Africa model were correct, mtDNA would be expected to show: - limited variation in modern mtDNA, implying a recent origin; - an African population that displays the most variation. The observed evidence can be compared with these expectations: - If modern populations derive from a process of long regional continuity, then mtDNA should reflect the establishment of those local populations, after 1.8 million years ago, when populations of Homo erectus first left Africa and moved into the rest of the Old World. Yet the absence of ancient mtDNA in any modern living population gives a different picture. The amount of genetic variation throughout all modern human populations is surprisingly small, and implies therefore a recent origin for the common ancestor of us all. - Although genetic variation among the world's population is small overall, it is greatest in African populations, implying they are the longest established. - If modern humans really did evolve recently in Africa, and then moved into the rest of the Old World where they mated with established archaic sapiens, the resulting population would contain a mixture of old and new mtDNA, with a bias toward the old because of the relative numbers of newcomers to archaic sapiens. Yet the evidence does not seem to support this view. The argument that genetic variation among widely separated populations has been homogenized by gene flow (interbreeding) is not tenable any more, according to population geneticists. Intermediate models Although these two hypotheses dominate the debate over the origins of modern humans, they represent extremes, and there is also room for several intermediate models. - One hypothesis holds that there might have been a single geographic origin, as predicted by the replacement model, but followed by migrations in which newcomers interbred with locally established groups of archaic sapiens. Thus, some of the genes of Neanderthals and archaic H. sapiens may still exist in modern populations; - Another hypothesis suggests that there could have been more extensive gene flow between different geographic populations than is allowed for in the multi-regional model, producing closer genetic continuity between populations. Anatomically modern humans evolved in Africa, and then their genes diffused to the rest of the world by gene flow, not by migration of anatomically modern humans and replacement of local peoples. In any case the result would be a much less clearcut signal in the fossil record. Case studies Neanderthal fossils have been found in Israel at several sites: Kebara, Tabun, and Amud. For many years there were no reliable absolute dates. Recently, these sites were securely dated.
The Neanderthals occupied Tabun around 110,000 years ago. However, the Neanderthals at Kebara and Amud lived 55,000 to 60,000 years ago. By contrast, at Qafzeh Cave, located nearby, remains currently interpreted as those of anatomically modern humans have been found in a layer dated to 90,000 years ago. These new dates lead to the surprising conclusion that Neanderthals and anatomically modern humans overlapped - if not directly coexisted - in this part of the world for a very long time (at least 30,000 years). Yet the anatomical evidence of the Qafzeh hominid skeletons reveals features reminiscent of Neanderthals. Although their faces and bodies are large and heavily built by today's standards, they are nonetheless claimed to be within the range of living peoples. Yet a recent statistical study comparing a number of measurements among Qafzeh, Upper Paleolithic and Neanderthal skulls found those from Qafzeh to fall in between the Upper Paleolithic and Neanderthal norms, though slightly closer to the Neanderthals. The Lagar Velho 1 remains, found in a rockshelter in Portugal dated to 24,500 years ago, correspond to the complete skeleton of a four-year-old child. This skeleton has anatomical features characteristic of early modern Europeans: - a prominent chin and certain other details of the mandible; - small front teeth; - characteristic proportions and muscle markings on the thumb; - narrowness of the front of the pelvis; - several aspects of the shoulder and forearm bones. Yet, intriguingly, a number of features also suggest Neanderthal affinities: - the front of the mandible, which slopes backward despite the chin; - details of the incisor teeth; - pectoral muscle markings; - knee proportions and short, strong lower-leg bones. Thus, the Lagar Velho child appears to exhibit a complex mosaic of Neanderthal and early modern human features. This combination can only have resulted from a mixed ancestry, something that had not been previously documented for Western Europe. The Lagar Velho child is interpreted as the result of interbreeding between indigenous Iberian Neanderthals and early modern humans dispersing throughout Iberia sometime after 30,000 years ago. Because the child lived several millennia after Neanderthals were thought to have disappeared, its anatomy probably reflects a true mixing of these populations during the period when they coexisted and not a rare chance mating between a Neanderthal and an early modern human. Population dispersal into Australia/Oceania Based on current data (and the conventional view), the evidence for the earliest colonization of Australia would be as follows: - archaeologists have generally agreed that modern humans arrived on Australia and its continental islands, New Guinea and Tasmania, about 35,000 to 40,000 years ago, a time range that is consistent with evidence of their appearance elsewhere in the Old World well outside Africa; - all hominids known from Greater Australia are anatomically modern Homo sapiens; - the emerging picture begins to suggest purposeful voyaging by groups possessed of surprisingly sophisticated boat-building and navigation skills; - the only major feature of early Greater Australia archaeology that does NOT fit comfortably with a consensus model of modern human population expansion in the mid-Upper Pleistocene is the lithic technology, which has a pronounced Middle, rather than Upper, Paleolithic cast.
Over the past decade, however, this consensus has been eroded by the discovery and dating of several sites: - Malakunanja II and Nauwalabila I, located in Arnhem Land, would be 50,000 to 60,000 years old; - Jinmium yielded dates of 116,000 to 176,000 years ago. Yet these early dates reveal numerous problems related to stratigraphic considerations and dating methods. Therefore, many scholars are skeptical of their value. If accurate, these dates require significant changes in current ideas, not just about the initial colonization of Australia, but about the entire chronology of human evolution in the early Upper Pleistocene. Either fully modern humans were present well outside Africa at a surprisingly early date or the behavioral capabilities long thought to be uniquely theirs were also associated, at least to some degree, with other hominids. As a major challenge, the journey from Southeast Asia and Indonesia to Australia, Tasmania and New Guinea would have required sea voyages, even with sea levels at their lowest during glacial maxima. So far, there is no archaeological evidence from Australian sites of vessels that could have made such a journey. However, what were coastal sites during the Ice Age are mostly now submerged beneath the sea. Overall the evidence suggested by mitochondrial DNA is the following: - the amount of genetic variation in human mitochondrial DNA is small and implies a recent origin for modern humans; - the African population displays the greatest amount of variation; this too is most reasonably interpreted as suggesting an African origin.
https://en.m.wikibooks.org/wiki/Introduction_to_Paleoanthropology/Hominids_MiddlePaleolithic
4.125
Questioning and analyzing the American mindset during the middle to late nineteenth century. Definition of Gothic (adjective): from the Germanic barbarian tribe, the Goths, derives a common term describing something that is crude, or uncivilized, or grotesque. "Gothic" describes architecture, literature, persons, and places. General characteristics of gothic characters in literature include a helpless victim. Stereotypical gothic characters include: the thief with a code of honor; the lonely vampire; the mad scientist; the tormented artist; the werewolf, horrified by himself; the knowing madman; the deformed assassin; the ignored prophet. America in the context of the gothic: the gothic writers explored the cultural anxieties and fears of the expanding nation, the "dark side". These writers addressed such trends as technological and scientific progress, individualism (free will, the self-made individual), and slavery and abolition. The gothic writers critiqued the assumption that America stood as the moral and guiding light of the world (Winthrop's "city on a hill"). Objectives of this unit include: recognizing characteristics of gothic writers (or dark romantics); understanding the fears and anxieties explored by the writers; evaluating the effectiveness of the writers' critical positions; collecting evidence; expressing our individual opinion and supporting that opinion in writing; and following a specific rubric. Before you go: on the quarter sheet of white paper on your table, 1. write your name; 2. write two characteristics of
http://www.slideshare.net/gswider/the-gothic-period-in-american-literature
4.34375
Article, Book Resources - Grades: 3–5 The Cretaceous Period is the most recent period of the Mesozoic Era, spanning 77 million years, from 142 million to 65 million years ago. In 1822, Omalius d'Halloy termed the chalky rocks (Latin: "creta") found on the English and French sides of the English Channel "Cretaceous." The name is now applied to all materials deposited above Jurassic Period rocks but below Tertiary Period rocks. During the Cretaceous more land was submerged than at any time since the Ordovician Period. Tectonic plates moved apart, causing major mountain building, and continents began to resemble those existing today. The Cretaceous System is subdivided into the following stages from youngest to oldest: Upper Cretaceous includes Maastrichtian, Campanian, Santonian, Coniacian, Turonian, and Cenomanian; Lower Cretaceous includes Albian, Aptian, Barremian, Hauterivian, Valanginian, and Berriasian. These names, based on type localities where rocks at a given stratigraphic level were first studied, are widely used to refer to rock units or chronological times. Cretaceous rocks make up nearly 29% of the total area of Phanerozoic deposits on the Earth's landmasses. Because of the great extent of their outcrop area and presence in drill cores, Cretaceous rocks are the most intensively studied part of the Phanerozoic rock column. At one time Cretaceous seas covered 50% of the present North American continent. Thick Cretaceous sedimentary deposits form a narrow belt of outcrops from British Columbia to Central America. They extend throughout the Rocky Mountains and western Plains states, around the north edge of the Mississippi embayment, and along the southern and eastern sides of the Appalachian Mountains. Volcanoes in the western United States spread layers of volcanic ash, now turned into bentonite, in the center of the continent. Cretaceous rocks crop out on several Caribbean islands. This group extends along the Andes Mountain belt from Colombia to Cape Horn, and onto Antarctica. Cretaceous outcrops cover parts of central and southeastern South America. A vast volcanic province in the Paraná River basin of Uruguay and Brazil is mirrored across the Atlantic Ocean in the Etendeka province of western Africa, regions that were joined before South America and Africa separated. In Europe, Cretaceous chalk also crops out around the Paris basin. Other deposits are in Denmark, north and central Germany, central and southern France, the Pyrenees Mountains, the Alps, Italy, Slovenia, coastal Croatia, Bosnia and Hercegovina, and Yugoslavia. Russia and the southwestern Asian republics have Cretaceous rocks. There are Cretaceous rocks in west Africa and Mozambique. India's Cretaceous deposits include the Deccan traps, a large, thick sequence of volcanic rocks east of Bombay. Cretaceous rocks crop out in Thailand, Borneo, Japan, Australia, and New Zealand. In eastern North America the continental border gradually sank, shifting the shoreline inland. Shallow seas spread over the interiors of North America, western Europe, eastern Europe, western Russia, and the northern Arctic. They occupied central Australia and eastern Africa, eventually covering one-third of the present-day landmasses. Tectonic plates moved apart after the breakup of Pangaea in the middle Mesozoic. Previously, the drift of Africa relative to Europe had created the Tethys Sea, an ancestral Mediterranean Sea, with many island arcs, basins, and microcontinental fragments. It reached its maximum width in early Cretaceous time. 
Movements of Africa, Europe, and the small Adriatic tectonic plate caused subduction that consumed Tethys, while seafloor spreading led to widening of the North Atlantic Ocean. Europe remained connected to North America until 81 million years ago; Greenland separated from Europe 70 million years ago. As the Adriatic plate carrying Italy closed Tethys and collided with Austria and Switzerland, the Alps rose. The Alpine region also marked a subduction zone. The Mediterranean began to open to the south. Mountain building also took place for the Dinaric and Hellenic Alps during collisions of the Carpathian-Serbo-Macedonian and Apulian arcs, in Saudi Arabia and Oman, and in the Himalayas. Seafloor spreading in the Atlantic led to westward movement of the North American plate. Compression around the Pacific Ocean margin started the Cordilleran and Laramide orogenies and began the creation of the present-day Rockies, Sierra Nevada, California Coast Ranges, and Andes, mountain belts that extend the length of North and South America. Enormous amounts of granite formed within the crust as subduction carried continental sediments deep within the crust around the Pacific margin and between Africa and Europe and melted. This led to volcanism in many subduction zones. Mountain building also affected Japan and the Philippines. Marine invertebrates included many modern-looking pelecypods and gastropods. Widespread ammonite fossils have helped define Cretaceous stratigraphy. The shells, or tests, of abundant foraminifera created the chalk deposits that gave the period its name. Extensive coral reefs grew in the equatorial belt. Dinosaurs ruled the land, with sauropods, tyrannosaurs, duck-billed and horned dinosaurs prominent. Pterosaurs and toothed birds flew in the skies, and mosasaurs appeared in the seas. Primitive mammals included insectivorous marsupials. Deciduous trees and other angiosperms created modern-looking forests by the middle of the Cretaceous Period. The End of the Cretaceous A great extinction marked the end of the Cretaceous Period. The dinosaurs, ammonites, and many marine creatures abruptly disappeared. High concentrations of iridium and textural features of the boundary clay layer separating Cretaceous and Tertiary rocks support the idea of a meteorite impact producing a worldwide layer of debris at essentially the same moment at the very end of the Cretaceous Period. The huge Chicxulub Crater off the Yucatán Peninsula of Mexico is the likely point of impact. Some researchers favor a volcanic catastrophe or other series of events and a more gradual extinction of species and genera, but the impact theory has gained wide acceptance. William D. Romey Bibliography: Dott, R. H., Jr., and Prothero, D. R., Evolution of the Earth, 5th ed. (1993); Hsu, K. J., ed., Mesozoic and Cenozoic Oceans (1986); Moullade, M., and Nairn, A., eds., The Phanerozoic Geology of the World: Vol. 2, The Mesozoic (1983).
http://www.scholastic.com/teachers/article/cretaceous-period
4.25
Overview Coral reefs are the ocean’s most diverse and complex ecosystems, supporting 25% of all marine life, including 800 species of reef-building corals and more than one million animal and plant species. They are close relatives of sea anemones and jellyfish, as each coral is a colony consisting of many individual sea anemone-like polyps that are all interconnected. Tropical coral reefs, found in warm, clear water at relatively shallow depths, are intricately patterned carpets of life growing on foundations formed primarily by calcium carbonate exoskeletons and coralline algae. These structures fuse over time, enlarging the reef and creating countless nooks and crannies. As the reef grows, species from nearly every major taxonomic group cover every square inch of these tightly integrated systems, providing food and shelter to a spectacular variety of fish and invertebrate species, including many of commercial value. ‘Hard’ corals use calcium carbonate from seawater to synthesize a hard, mineral protective shell around each polyp. These exoskeletons, along with shells formed by coralline algae, mollusks and tubeworms, spicules made by sponges, and shells of other calcifying species, form the structural foundation of coral reefs. Corals catch plankton with their tentacles, but most of their nutrition comes from photosynthetic algae that live in their tissues, using the coral’s waste products for their own nutrition and feeding the corals with sugars and other nutritious compounds that leak through their cell membranes. Deep water reefs, formed by large, long-lived but fragile soft corals, are also architecturally and ecologically complex and teem with life, but lack a calcium carbonate foundation. Though beyond the reach of sunlight, underwater lights reveal them to be nearly as beautiful and colorful as their tropical counterparts. The condition of coral reefs is important to the Ocean Health Index because healthy reefs provide many benefits to people, including food, natural products, coastal protection from storms, jobs and revenue, tourism and recreation, biodiversity and others. 60% of reefs are already seriously damaged by local sources such as overfishing, destructive fishing, anchor damage, coral bleaching, coral mining, sedimentation, pollution, and disease. When these types of human threats are combined with the influence of rising ocean temperatures, 75% of reefs are threatened (Burke et al. 2011). How Was It Measured? The extent of coral reefs was derived from the 500m resolution dataset developed for Reefs at Risk Revisited (Burke et al. 2011), in conjunction with a re-sampled version of the Ocean Health Index EEZ regions. The condition of reefs was estimated using data for percentage cover by live coral determined from 12,634 surveys conducted from 1975-2006 and summarized by Bruno and Selig (2007) and Schutte et al. (2010). The reference point for coral reefs is the percent cover of coral reefs in 1975. The current Status is reported as: current percent cover of coral reefs ÷ percent cover of coral reefs in 1975. Like all of the habitats used in the OHI, coral reefs are used as a component in calculating scores for many of the different goals. However, they are used differently depending on the goal in question. Although habitats such as coral reefs are used in calculating these goal scores, countries are not penalized for not having a certain habitat type.
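The Status calculation just described is a simple ratio of current live coral cover to the 1975 reference cover. A minimal sketch of that ratio is below; the function name, and the choice to cap the score at 1.0, are illustrative assumptions rather than details taken from the published OHI methods:

def coral_status(current_cover_pct, reference_cover_pct_1975):
    # Status = current percent cover / percent cover in 1975.
    # Capping at 1.0 (no extra credit for exceeding the 1975 reference) is an
    # assumption made for this sketch, not a documented OHI rule.
    if reference_cover_pct_1975 <= 0:
        raise ValueError("reference cover must be positive")
    return min(current_cover_pct / reference_cover_pct_1975, 1.0)

# Example: a region whose live coral cover fell from 40% in 1975 to 22% today
# would score 22 / 40 = 0.55 on this component.
print(coral_status(22.0, 40.0))  # 0.55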
Calculations are based on the rank of existing habitats, as opposed to using all possible habitat types. The condition and extent of coral reefs can either be used as a direct measure, or indirectly in a supporting capacity. Coral reefs are used directly as a component in calculating Coastal Protection (Corals) and Biodiversity (Habitat: Coral). For these goals, the extent and condition of coral habitat factor directly into score calculations. Coral is also used as a component for the Natural Products goal, but is measured in a different way. Coral reefs are also used indirectly in calculating scores for Artisanal Fishing, Natural Products (Ornamental Fish), and Livelihoods and Economies (Aquarium Trade). For example, Artisanal Fishing: High Bycatch was assessed by looking at the presence of blast and poison fishing, practices that both degrade coral reefs. What Are The Impacts? Corals that are exposed to elevated sea surface temperatures expel the symbiotic photosynthetic algae responsible for their nutrition and coloration [zooxanthellae] in a process known as coral bleaching. Corals can recover from occasional bleaching, but not from repeated bleaching. Increases in sea-surface temperature of about 1-3 °C are projected to result in more frequent coral bleaching events and widespread mortality. Elevated sea surface temperatures also cause increased damage to reefs from breakage as storm frequency and intensity increase. Increasing amounts of carbon dioxide in the atmosphere cause increased amounts in surface waters, leading to ocean acidification (lowered pH). Acidification decreases the availability of calcium carbonate, making it harder for corals and other calcifying organisms to form their shells; it also dissolves existing shells. By the end of the century, it is predicted that ocean pH will drop from its current value of about 8.1 by as much as 0.4 units; by 2050, conditions will not be sufficient for the formation of calcium carbonate (Hoegh-Guldberg et al. 2007). Overfishing can seriously degrade the structure and health of coral reefs. If populations of algal grazers are reduced, aggressive algae can overgrow the reef. Reefs can also decline if overfishing reduces populations of fish that normally keep coral predators in check. Overfishing threatens more than 70% of coral reefs in the Caribbean (Burke et al. 2011). Most hard corals develop and grow very slowly, so recovery from damage caused by hurricanes, shipwrecks or anchors may take many years. Branching hard corals break more easily in storms or from physical contact, but may recover more quickly because they grow faster and each fragment can potentially form a new coral. HUMAN HEALTH IMPACT Corals and affiliated sponges contain bioactive chemical compounds that can be useful as cancer and virus-fighting drugs. For example, AZT, a compound generated by a Caribbean reef sponge, is an antiretroviral drug that effectively slows the spread of the HIV virus. Read more about medicines from coral reefs and other marine organisms. 500 million people depend on coral reefs for coastal protection, food, and tourism income (Wilkinson, 2008). Coral reefs help protect shorelines from storm damage and can absorb 70-90% of wave energy. The total net benefit per year of the world’s coral reefs was estimated in 2003 to be $29.8 billion. Tourism and recreation account for $9.6 billion of this amount, coastal protection for $9.0 billion, fisheries for $5.7 billion, and biodiversity for $5.5 billion.
Costs of coral bleaching to tourism expressed in Net Present Value (NPV) were estimated at $10-$40 billion (Cesar, Burke and Pet-Soede, 2003, cited in Conservation International 2008). What Has Been Done? Coral reefs are globally widespread and many are threatened by climate change, ocean acidification, pollution.and other pressures. Scientists would be hard pressed to study them all, but ordinary citizens--including you-- can help. A recent study by the European Commission found that volunteers in the Reef Check program identified decreases in live coral cover and increases in the cover of rubble and sand about as well as did scientists from the University of Rhode Island, though their identifications and counts of fishes differed from the professional assessments. The study showed that citizen scientists can make valuable contributions to some aspects of long-term marine environmental monitoring. In 2010, the not-for-profit organization Nature Seychelles launched its Reef Rescuers Project. This project aims to restore coral reefs around the Seychelles. These reefs were damaged in a mass coral bleaching event caused by an El Niño event in 1998. Surviving coral fragments were selected and used to create an underwater nursery. These corals were raised on ropes for about a year, a method known as “coral gardening”, then transplanted to reefs off of Praslin. By 2015, Nature Seychelles had transplanted more than 24,000 corals. Now, Nature Seychelles has launched a program called the Coral Reef Rescuers Training Program. The program will provide scientific and practical knowledge about reef gardening and how to transplant the corals from underwater nurseries to sites that need restoration. In passing on their knowledge, Nature Seychelles hopes to give people the tools they need for large-scale restoration projects wherever coral reefs need help. Get More Information Coral Reef Alliance [CORAL] An international NGO founded to support local projects that benefit coral reefs and surrounding communities. International Coral Reef Initiative [ICRI] A partnership between non-government organizations, governments and other international organizations that works to implement international conventions and agreements. Reef Base: A Global Information System for Coral Reefs A source for coral reef data, publications, maps, and other resources from around the world hosted by World Fish Center Science to Action: Coral Health Index: Measuring Coral Community Health A guidebook to evaluating coral health and understanding impacts World Resources Institute [WRI]: Reefs at Risk Revisited A booklet detailing spatial and statistical data on current threats to coral reefs. Bruno, J. F. and E.R. Selig. (2007). Regional Decline of Coral Cover in the Indo-Pacific: Timing, Extent, and Subregional Comparisons. PLoS ONE 2, e711. Burke, L., K. Reytar, M. Spalding and A. Perry. (2011). Reefs at Risk Revisited. World Resources Institute: Washington D.C. Hoegh-Guldberg, O. et al. (2007). Coral reefs under rapid climate change and ocean acidification. Science 318, 1737–1742 (2007). Schutte, V. G. W., E.R. Selig and J.F. Bruno. (2010). Regional spatio-temporal trends in Caribbean coral reef benthic communities. Mar Ecol Prog Ser 402, 115–122. Spalding, M., C. Ravilious and E.P. Green. (2001). World Atlas of Coral Reefs.Prepared at the UNEP World Conservation Monitoring Centre. Berkeley,CA: University of California Press Wilkinson, C. (2008). Status of Coral Reefs of the World: 2008. 
Global Coral Reef Monitoring Network and Reef and Rainforest Research Center, Townsville, Australia.
http://www.oceanhealthindex.org/methodology/components/coral-reefs
4.03125
THE GULF STREAM, a relative newcomer on the geological scene, is an odd, fast-moving circulation of warm water that travels in an unfixed position, a few hundred miles north of Florida, up the east coast of the United States to Cape Hatteras, North Carolina, then on past Nantucket Island, before kicking eastward across the Atlantic Ocean to the British Isles. In this way, the Gulf Stream, a part of the western edge of the North Atlantic circulation, acts as a boundary that prevents the warm water of the Sargasso Sea from overflowing the colder, denser waters on the inshore side. The Gulf Stream is one of the strongest and most extensive known currents in the world, and it is separated from the United States by a narrow strip of cold water. The Gulf Stream, which can be as much as 50 miles (80 kilometers) wide and 1,300 feet (400 meters) deep, is caused by the northeast and southeast trade winds on the surface of the water and the equatorial currents that meet in the region of the windward islands of the Caribbean Sea. The Gulf Stream’s rival, the Kuroshio Current, located along the western edge of the Pacific Ocean and the coast of Japan, is part of a transpacific system that connects the North Pacific, California, and equatorial currents. The Gulf Stream triples in volume and is strengthened by the waters from the Florida Straits, by way of the Florida Current, and by currents coming from the northern and eastern coasts of Puerto Rico and the Bahamas. It can travel more than 60 miles (96 kilometers) a day. The Gulf Stream, first mapped by Benjamin Franklin and his American whaling captain cousin Timothy Folger, early pioneers in using temperature in an attempt to define its boundaries, maintains its dimensions for nearly 1,000 miles (1,610 kilometers) up the East Coast of the United States. Franklin, using observations of current speed in the region of the Gulf Stream and plotting them on a chart, was able to draw a river traversing the Atlantic Ocean with speeds ranging from 2 to 4 knots. The strong carrying power of the Gulf Stream’s warm equatorial layers of water has a notable, almost direct effect on the climate in various parts of the Earth. As the Gulf Stream moves past Cape Hatteras, North Carolina, it begins to flow away from the East Coast of the United States. The altered flow of the Gulf Stream, known as meanders or eddies, separates the cold slope water to the north from warm Sargasso Sea water to the south. As the Gulf Stream flows into deeper water, it carries warm water to the North Atlantic region as it enters the Norwegian Sea between the Faroe Islands and Great Britain. Thus, the Gulf Stream, which bathes northwestern Europe with warmer water and wind currents, is largely believed to be the reason for the mild European climate. Too warm to encourage the kind of fish that are the main catch of North Atlantic waters, the Gulf Stream does bring well-developed specimens of tropical fish, like the Portuguese man-of-war jellyfish, much farther north than they would normally venture. In addition, there are two particular species of plant (the double coconut tree indigenous to Jamaica) and animal life (the freshwater eels of Europe) that are carried for thousands of miles by the Gulf Stream surface transport system to the shores of Ireland and .
http://mytravelphoto.org/287-gulf-stream.html
4
An electroencephalogram (EEG) is a test used to evaluate the electrical activity in the brain. Brain cells communicate with each other through electrical impulses. An EEG can be used to help detect potential problems associated with this activity. The test tracks and records brain wave patterns. Small, flat metal discs called electrodes are attached to the scalp with wires. The electrodes analyze the electrical impulses in the brain and send signals to a computer, where the results are recorded. The electrical impulses in an EEG recording look like wavy lines with peaks and valleys. These lines allow doctors to quickly assess whether there are abnormal patterns. Any irregularities may be a sign of seizures or other brain disorders. An EEG is used to detect problems in the electrical activity of the brain that may be associated with certain brain disorders. The measurements given by an EEG are used to confirm or rule out various conditions, including: - seizure disorders (such as epilepsy) - a head injury - encephalitis (an inflammation of the brain) - a brain tumor - encephalopathy (a disease that causes brain dysfunction) - memory problems - sleep disorders When someone is in a coma, an EEG may be performed to determine the level of brain activity. The test can also be used to monitor activity during brain surgery. There are no risks associated with an EEG. The test is painless and safe. When someone has epilepsy or another seizure disorder, the stimuli presented during the test (such as a flashing light) may cause a seizure. However, the technician performing the EEG is trained to safely manage the situation should this occur. Before the test, you should take the following steps: - Wash your hair the night before the EEG, and don’t put any products (such as sprays or gels) in your hair on the day of the test. - Ask your doctor if you should stop taking any medications before the test. You should also make a list of your medications and give it to the technician performing the EEG. - Avoid consuming any food or drinks containing caffeine for at least eight hours prior to the test. Your doctor may ask you to sleep as little as possible the night before the test if you’re required to sleep during the EEG. You may also be given a sedative to help you to relax and sleep before the test begins. After the EEG is over, you can continue with your regular routine for the day. However, if you were given a sedative, the medication will remain in your system for a little while. This means that you’ll have to bring someone with you so they can take you home after the test. You’ll need to rest and avoid driving until the medication has worn off. An EEG measures the electrical impulses in your brain by using several electrodes that are attached to your scalp. An electrode is a conductor through which an electric current enters or leaves. The electrodes transfer information from your brain to a machine that measures and records the data. An EEG may be given at a hospital, at your doctor’s office, or at a laboratory by a specialized technician. It usually takes 30 to 60 minutes to complete. The test typically involves the following steps: - You’ll be asked to lie down on your back in a reclining chair or on a bed. - The technician will measure your head and mark where the electrodes will be placed. These spots are then scrubbed with a special cream that helps the electrodes get a high-quality reading. - The technician will put a sticky gel adhesive on 16 to 25 electrodes. 
They will then be attached to various spots on your scalp. - Once the test begins, the electrodes send electrical impulse data from your brain to the recording machine. This machine converts the electrical impulses into visual patterns that can be seen on a screen. These patterns are saved to a computer. - The technician may instruct you to do certain things while the test is in progress. They may ask you to lie still, close your eyes, breathe deeply, or look at stimuli (such as a flashing light or a picture). - After the test is complete, the technician will remove the electrodes from your scalp. During the test, very little electricity is passed between the electrodes and your skin, so you’ll feel very little to no discomfort. A neurologist (someone who specializes in nervous system disorders) interprets the recordings taken from the EEG and then sends the results to your doctor. Your doctor may schedule an appointment to go over the test results with you. Electrical activity in the brain is seen in an EEG as a pattern of waves. Different levels of consciousness, such as sleeping and waking, have a specific range of frequencies of waves per second that are considered normal. For example, the wave patterns move faster when you’re awake than when you’re asleep. The EEG will show if the frequency of waves or patterns are normal. Normal activity typically means you don’t have a brain disorder. Abnormal EEG results may be due to: - epilepsy or another seizure disorder - abnormal bleeding or hemorrhage - sleep disorder - encephalitis (swelling of the brain) - a tumor - dead tissue due to a blockage of blood flow - alcohol or drug abuse - head injury It’s very important to discuss your test results with your doctor. Before you review the results with them, it may be helpful to write down any questions you might want to ask. Be sure to speak up if there’s anything about your results that you don’t understand.
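The "waves per second" language above refers to the dominant frequency of the recorded signal, which is what the conventional delta, theta, alpha and beta bands describe. The sketch below is a minimal, illustrative calculation on a synthetic 10 Hz signal; the sampling rate and the band boundaries are common textbook assumptions rather than values stated in this article, and it is not a substitute for interpretation by a neurologist.

```python
import numpy as np

# Illustrative estimate of the dominant EEG frequency from a sampled signal.
# A 10 Hz test signal is fabricated here; in practice the samples would come
# from the recording machine described above.
fs = 256                                   # samples per second (assumed)
t = np.arange(0, 10, 1 / fs)               # 10 seconds of data
signal = np.sin(2 * np.pi * 10 * t) + 0.3 * np.random.randn(t.size)

spectrum = np.abs(np.fft.rfft(signal))     # amplitude spectrum
freqs = np.fft.rfftfreq(t.size, d=1 / fs)  # frequency axis in Hz
dominant = freqs[np.argmax(spectrum[1:]) + 1]  # skip the 0 Hz (DC) bin

# Conventional textbook bands (approximate, for illustration only).
bands = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}
label = next((name for name, (lo, hi) in bands.items() if lo <= dominant < hi), "other")
print(f"Dominant frequency ~ {dominant:.1f} Hz ({label} range)")
```

For the synthetic signal the dominant frequency lands near 10 Hz, in the alpha range typical of a relaxed, awake state, while sleep recordings shift toward the slower bands.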
http://www.healthline.com/health/eeg
4.3125
BASIN AND PETROLEUM SYSTEM

6.1 SEDIMENTARY BASIN
Sedimentary basins correspond to depressions in the upper parts of the Earth's crust, generally occupied by a sea or an ocean. These depressions are initiated by geodynamic phenomena often associated with the displacement of lithosphere plates. The basement of the sedimentary basins is formed of crust made up of igneous rocks (granite on the continents and basalt in the oceans). Sedimentary rocks such as clays, sandstones, carbonates or massive salt have accumulated in these basins over geological time. Sedimentation generally involves a process extending over tens of millions of years, at a rate of several millimetres per year on average. Chiefly due to the weight of the deposits, the ongoing geodynamic processes and the accumulation of sediments lead to deformation and progressive sinking of the underlying crust. This accentuates the initial depression, giving rise to a sedimentary filling that is often many kilometres thick. This deepening of the basin, which is known as subsidence, results from the combined effects of tectonic movements and sedimentary overburden. In extreme cases, subsidence can reach as much as 20 km.

The tectonic setting is the premier criterion for distinguishing different types of sedimentary basins:
1. Extensional basins occur within or between plates and are associated with increased heat flow due to hot mantle plumes.
2. Collisional basins occur where plates collide, characterized either by subduction of an oceanic plate or by continental collision.
3. Transtensional basins occur where plates move in a strike-slip fashion relative to each other.

6.2 EXTENSIONAL BASINS
Rift basins develop in continental crust and constitute the incipient extensional basin type. If the process continues, it will ultimately lead to the development of an ocean basin flanked by passive margins; alternatively, an intracratonic basin will form. Rift basins consist of a graben or half graben separated from surrounding horsts by normal faults. They can be filled with both continental and marine deposits. Intracratonic basins develop when rifting ceases, which leads to lithospheric cooling due to reduced heat flow; they are commonly large but not very deep.
- Proto-oceanic troughs form the transitional stage to the development of large ocean basins, and are underlain by incipient oceanic crust.
- Passive margins develop on continental margins along the edges of ocean basins; subsidence is caused by lithospheric cooling and sediment loading, and depending on the environmental setting clastic or carbonate facies may dominate.
- Ocean basins are dominated by pelagic deposition (biogenic material and clays) in the central parts and turbidites along the margins.
Fig (6.1) shows the extensional type of basin.

6.3 COLLISIONAL BASIN
Subduction is a common process at active margins where plates collide and at least one oceanic plate is involved. Several types of sedimentary basins can be formed due to subduction, including trench basins, forearc basins, backarc basins and retroarc foreland basins. Collisional types of basins are shown in figure (6.2); these basins form during the subduction process.
- Trench basins can be very deep; their fill depends strongly on whether they are intra-oceanic or proximal to a continent. Accretionary prisms are ocean sediments that are scraped off the subducting plate; they sometimes form island chains.
- Forearc basins form between the accretionary prism and the volcanic arc and subside entirely due to sediment loading; the sedimentary fill depends primarily on whether they are intra-oceanic or proximal to a continent.
- Backarc basins are extensional basins that may form on the overriding plate, behind the volcanic arc; they are commonly filled with continental deposits.
- Continental collision leads to the creation of orogenic (mountain) belts. Lithospheric loading causes the development of peripheral foreland basins, which typically exhibit a fill from deep marine through shallow marine to continental deposits.
- Retroarc foreland basins form as a result of lithospheric loading behind a mountainous arc under a compressional regime; they are commonly filled with coarse facies (e.g. alluvial fans) adjacent to lacustrine or marine deposits.
- Foreland basins can accumulate exceptionally thick (~10 km) stratigraphic successions.

6.4 TRANSTENSION BASIN
Strike-slip basins form in transtensional regimes and are usually relatively small but also deep. Fig (6.3) shows the transtension type of basin and its classifications.

6.4 BASIN ANALYSIS
Basin analysis involves interpretation of the formation, evolution, architecture and fill of a sedimentary basin by examining the geological variables associated with the basin. Basin analysis encompasses many topics, since it integrates several fields within geology, but its emphasis is on evaluation of the strata that fill stratigraphic basins. It helps the exploration and development of energy, mineral and other resources (e.g. water, brines, etc.) that may occur within sedimentary basins, and it provides a foundation for extrapolating known information into unknown regions in order to predict the nature of the basin where evidence is not available. The importance of basin analysis to the petroleum industry is decided by the geographic location, the kind of basin, the basin fill characteristics, content and age. A basin model is built on a framework of geological surfaces that are correlated within the basin, and the stratigraphic framework can be expressed in terms of rock type (lithostratigraphy), age (chronostratigraphy), fossil content (biostratigraphy), or rock properties such as seismic velocity (seismic stratigraphy).

Major approaches:
- Basin formation and character
- Description and correlation of stratigraphic basin fill (sequence stratigraphy)
- The sedimentary history and the effects of thermal changes on these sediments
- Tectonic history
- Petroleum system
- Prospect generation and evaluation

Purpose of Basin Analysis
- Determine the physical chronostratigraphic framework by interpreting sequences, systems tracts, and parasequences and/or simple sequences on outcrops, well logs and seismic data, and age-date with high-resolution biostratigraphy.
- Construct geohistory, total subsidence and tectonic subsidence curves on sequence boundaries.
- Complete tectonostratigraphic analysis, including: relate major transgressive-regressive facies cycles to tectonic events; relate changes in rates of tectonic subsidence curves to plate-tectonic events; assign a cause to tectonically enhanced unconformities; relate magmatism to the tectonic subsidence curve; map tectonostratigraphic units; and determine the style and orientation of structures with tectonostratigraphic, structural and paleotectonic analysis.

Basin analysis is thus a continuous process carried out in stages, with different geoscientific activities playing their pivotal role during the stages of work. The information generated at the various stages may require re-interpretation, due to improvement in techniques or concepts or to the discovery of new plays in the basin. The detailed scheme of basin analysis encompasses the major activities and sub-activities, while the flow chart following it provides a broad overview. From knowledge of the worldwide sedimentary basins, certain associational characteristics of hydrocarbon accumulation are recognised, and the stratigraphic record of the basin fill is the basis for interpreting the causes of hydrocarbon generation and accumulation. It is clear that basin analysis requires a synergistic approach.

Stages of Basin Analysis
(i) Initial Stage Analysis: During the initial stages of exploration, the broad framework of a basin may be worked out with the help of satellite imagery, aerial photos, surface geological data, and gravity, magnetic and seismic data. At this stage the tectonic framework, the nature of the sedimentary fill, their distribution in space and time and the potential of the total basin are broadly indicated.
(ii) Middle Stage Analysis: The second stage of basin analysis is reached during the advanced phases of hydrocarbon exploration, when the tectonic framework, structural styles and habitat of oil/gas are better known and more refined and quantitative analysis becomes feasible. Stratigraphic and sedimentological information obtained from wells and seismic data helps in reconstructing the depositional history and in inter-relating the structural patterns with sedimentation. Detailed lithological and paleoenvironmental studies, working out depositional systems, geochemical studies and identification of petroleum systems are the key elements at this stage. This results in a more precise definition of oil and gas generation and accumulation zones and their relationship with the stratigraphic and tectonic settings in various parts of the basin, leading to a predictive exploration model.
(iii) Final Stage Analysis: With this background, the final stage of basin analysis, which forms the major part of exploration activity, is the search, with the help of the above exploration model and directed towards the discovery of hydrocarbons, to locate favourable structural, stratigraphic, paleogeomorphic and other subtle prospects for exploratory drilling.

6.5 PETROLEUM SYSTEM
Sedimentary basins are the subsiding areas where sediments accumulate to form stratigraphic successions; this subsidence is caused by plate tectonics. Hydrocarbons are formed and entrapped within such sedimentary basins. However, not all sedimentary basins satisfy the necessary conditions for them to become oil bearing, and this is where the concept of a petroleum system comes in. A petroleum system is a sedimentary basin, or most frequently some part of a basin, which combines all the essential structural and sedimentary ingredients (source rock, reservoir rock, cap rock and trap), the thermal history of the source rock associated with its progressive burial, and the appropriate migration of hydrocarbons and their entrapment. In other words, it brings together the geological processes necessary for the formation and accumulation of oil and gas in deposits. Fig (6.4) shows that a petroleum system corresponds to a sedimentary basin.

6.5.1 Elements of a Petroleum System
(i) Source Rock: The source rock is a clayey or carbonate sediment containing a large quantity of organic debris accumulated at the same time as the mineral constituents; it is rich in organic sedimentary matter (for it to be called a source rock, the organic matter should account for at least 2% of the rock by weight). This organic material corresponds to the accumulation of more or less well preserved remains of organic tissues derived from populations of organisms. These organisms are essentially planktonic algae, higher plants and bacteria that together make up the major part of our planet's biomass. It should be noted that this kind of rock is far from common: to allow significant quantities of organic matter to accumulate in a sediment requires very special conditions. In addition, sedimentary basins which lack intervals with sufficient quantities of organic matter cannot develop oil bearing deposits. In any case, rocks rich in organic matter are most often clayey or marly (a mixture of clay and limestone).
Conditions for a source rock:
- The depositional environment must be associated with an eco-system that produces a large amount of biomass (high biological productivity) (Pedersen and Calvert, 1990).
- The sedimentary environment must be devoid of oxygen (anoxic), to prevent decomposition of the organic matter by aerobic bacteria and consumption by benthic organisms (Demaison and Moore, 1980).
- This should be combined with a good preservation of the organic matter after the death of the organisms, as well as during its incorporation into the sediment.
The source rock is an essential element in the petroleum system since, in a way, it acts as a petroleum and gas factory: hydrocarbons are formed by thermal decomposition of the fossilised organic matter contained within the rock.
(ii) Reservoir Rocks: The hydrocarbons formed within the source rock are later expelled towards a system of drains, generally made up of porous and permeable rocks, which are also referred to as reservoir rocks. These rocks have porosity ranging from 5 to 25%, and even up to 30%, of the rock volume. Due to their buoyancy (petroleum products have a lower density than the water completely impregnating the sedimentary rocks), the hydrocarbons migrate towards the surface of the basin, in almost all cases along the sedimentary beds. Sets of fractures or faults can also act as drains for the hydrocarbons. These drains can be considered as the plumbing of the petroleum system.
(iii) Cap Rock: A cap rock must be situated above the drains. Because of its impermeable character, being fine grained with a low porosity and permeability, the cap rock will confine the hydrocarbons to the porous and permeable system within which they are migrating. For example, the cap can be a clayey rock or massive salt. These properties result from sedimentation in a low-energy environment, going hand in hand with the weak circulation of water and anoxic conditions (Huc, 1988). The absence of cap rocks results in the dispersal of the hydrocarbons in the sedimentary basin and their escape towards the surface, where they are destroyed by chemical (oxidation) or biological (biodegradation) mechanisms, in the same way as occurs during accidental oil pollution.
(iv) Traps: During their migration towards the surface, hydrocarbons can encounter flaws in the plumbing: the presence of impermeable barriers (breaks in continuity of the drains caused by offsets in the sedimentary succession due to faults) or a deterioration in the drain quality (loss of permeability). These may take the form of closures around high points, for example due to the fold geometry of an anticline. These situations create zones of accumulation of hydrocarbon bearing fluids that correspond to the deposits from which oil operators can extract crude oils and gases. These traps are called structural or stratigraphic according to whether their main cause is the deformation of the porous layers (folding or faulting) or lateral variations of porosity and permeability in the sediments. Fig (6.5) shows the structures most suitable for entrapment of hydrocarbons: (a) fault, (b) anticline, (c) unconformity and (d) pinchout.

6.5.2 Evolution of a Petroleum System
Apart from containing these essential ingredients, the petroleum system should be seen as a whole entity functioning in a dynamic framework. For petroleum to be formed, a thermal history is required that involves progressive heating up of the source rocks. This thermal flux from the Earth is manifested by a progressive increase of temperature with depth in the sediments of about 30°C/km (known as the geothermal gradient). The increase of temperature with depth, which is well known to miners, is partly explained by the contribution of fossil thermal energy, originating from the time of the Earth's formation during the accretion of planetesimals around 4.5 Ga ago, and its progressive dissipation over time. The remaining contribution comes from the thermal energy released due to the continual decay of radioactive elements naturally present in the Earth's crust; this contribution accounts for about a half of the heat flow. Over the course of geological time (generally some tens of millions of years) the source rock in a subsiding basin will become buried and its temperature will rise. This increase in temperature during the subsidence of the source rock prompts the transformation of part of the organic matter that is present into petroleum and gas.
(i) Cracking: This corresponds to a kinetic phenomenon that depends on both temperature and time: a phenomenon which, activated by thermal energy, leads to the breaking of chemical bonds and the production of chemical species of lower and lower molecular weight. The large molecules characterizing the initial solid organic matter are split up into smaller molecules that make up a liquid called petroleum. Then, with rising temperature, these molecules are themselves reduced in size, thus forming a gas. This is described as entering the oil window and then the gas window.
(ii) Entrapment in Reservoir Rock: The hydrocarbons formed in this way are expelled from the source rock (primary migration). This is just what happens when a sponge (the source rock) is pressed between two porous bricks (the surrounding reservoir rocks). In fact, this displacement is governed by the difference in pressure between the source rock and the drains, a difference due to the greater compressibility of the former. After expulsion from the source rock, the hydrocarbons migrate towards the surface of the basin along the drains (secondary migration), until they eventually encounter a trap where they can accumulate. Hydrocarbons will eventually find their way to the surface if they are not held back by traps or if the cover forms an inadequate seal. In such cases, hydrocarbons will occur as natural localized emanations of oil or gas (seepages, shows). These seepages can be found in most of the petroleum provinces which are currently active. Oils reaching the surface in this way, or which accumulate at shallow depths, are altered by bacteria that render the oils very viscous (Connan, 1984; Head et al., 2003). In some cases, this arrival of oil at shallow depth leads to the formation of enormous superficial accumulations impregnating the exposed rocks. They were exploited throughout Antiquity as a source of bitumen, as well as by the early explorationists who used them to locate oil deposits. The analysis of seepages, when they exist, still forms part of the panoply of modern-day oil explorationists.
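Because the report quotes a geothermal gradient of roughly 30°C/km, a short worked example can tie burial depth to source-rock temperature. In the sketch below the 15°C surface temperature and the 60-120°C "oil window" range are common textbook assumptions added for illustration; only the 30°C/km gradient comes from the text above.

```python
# Illustrative burial-temperature calculation using the ~30 C/km
# geothermal gradient quoted above. The surface temperature and the
# oil-window range are assumed textbook values, not from this report.

SURFACE_TEMP_C = 15.0          # assumed mean surface temperature
GRADIENT_C_PER_KM = 30.0       # geothermal gradient from the text
OIL_WINDOW_C = (60.0, 120.0)   # commonly cited range (assumption)

def temperature_at_depth(depth_km):
    """Temperature of sediments buried to a given depth, in degrees C."""
    return SURFACE_TEMP_C + GRADIENT_C_PER_KM * depth_km

for depth in (1, 2, 3, 4, 5):
    t = temperature_at_depth(depth)
    in_window = OIL_WINDOW_C[0] <= t <= OIL_WINDOW_C[1]
    note = "  <- within assumed oil window" if in_window else ""
    print(f"{depth} km: {t:.0f} C{note}")
```

Under these assumptions a source rock buried to roughly 2-3 km sits in the temperature range where cracking to liquid petroleum is usually said to begin, which is why progressive subsidence matters so much in the discussion above.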
https://www.scribd.com/doc/48096687/Basin-and-Petroleum-System
4.1875
A fun, easy-to-implement collection of activities that give elementary and middle-school students a real understanding of key math concepts. Math is a difficult and abstract subject for many students, yet teachers need to make sure their students comprehend basic math concepts. This engaging activity book is a resource teachers can use to give students concrete understanding of the math behind the questions on most standardized tests, and includes information that will give students a firm grounding to work with more advanced math concepts. Math Wise! is a key resource for teachers who want to teach their students the fundamentals that drive math problems.
http://www.moviemars.com/professional-and-technical/book-math-wise-9780470471999.htm
4.09375
A scientific control is an experiment or observation designed to minimize the effects of variables other than the single independent variable. This increases the reliability of the results, often through a comparison between control measurements and the other measurements. Scientific controls are a part of the scientific method. An example of a scientific control (sometimes called an "experimental control") might be testing plant fertilizer by giving it to only half the plants in a garden: the plants that receive no fertilizer are the control group, because they establish the baseline level of growth that the fertilizer-treated plants will be compared against. Without a control group, the experiment cannot determine whether the fertilizer-treated plants grow more than they would have if untreated. Ideally, all variables in an experiment will be controlled (accounted for by the control measurements) and none will be uncontrolled. In such an experiment, if all the controls work as expected, it is possible to conclude that the experiment is working as intended and that the results of the experiment are due to the effect of the variable being tested. That is, scientific controls allow an investigator to make a claim like "Two situations were identical until factor X occurred. Since factor X is the only difference between the two situations, the new outcome was caused by factor X." There are many forms of controlled experiments. A relatively simple one separates research subjects or biological specimens into two groups: an experimental group and a control group. No treatment is given to the control group, while the experimental group is changed according to some key variable of interest, and the two groups are otherwise kept under the same conditions. Controls eliminate alternate explanations of experimental results, especially experimental errors and experimenter bias. Many controls are specific to the type of experiment being performed, as in the molecular markers used in SDS-PAGE experiments, and may simply have the purpose of ensuring that the equipment is working properly. The selection and use of proper controls to ensure that experimental results are valid (for example, absence of confounding variables) can be very difficult. Control measurements may also be used for other purposes: for example, a measurement of a microphone's background noise in the absence of a signal allows the noise to be subtracted from later measurements of the signal, thus producing a processed signal of higher quality. For example, if a researcher feeds an experimental artificial sweetener to sixty laboratory rats and observes that ten of them subsequently become sick, the underlying cause could be the sweetener itself or something unrelated. Other variables, which may not be readily obvious, may interfere with the experimental design. For instance, perhaps the rats were simply not supplied with enough food or water, or the water was contaminated and undrinkable, or the rats were under some psychological or physiological stress, etc. Eliminating each of these possible explanations individually would be time-consuming and difficult. However, if a control group is used that does not receive the sweetener but is otherwise treated identically, any difference between the two groups can be ascribed to the sweetener itself with much greater confidence.
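The sweetener example above can be made concrete with a tiny simulation: subjects are split into a treatment group and a control group, and only the difference between the groups is informative. Everything numeric in the sketch below (group size, sickness rates, the random seed) is invented for illustration and does not come from any real study.

```python
import random

# Toy simulation of the controlled experiment described above:
# a treatment group receives the sweetener, a control group does not,
# and we compare how many rats in each group become sick.

random.seed(1)

def run_group(n, sick_probability):
    """Return how many of n subjects become sick, given a per-subject probability."""
    return sum(random.random() < sick_probability for _ in range(n))

n_per_group = 60
baseline_rate = 0.10    # assumed background sickness rate
treatment_rate = 0.10   # set equal to baseline: the sweetener has no real effect here

control_sick = run_group(n_per_group, baseline_rate)
treated_sick = run_group(n_per_group, treatment_rate)

print(f"control:   {control_sick}/{n_per_group} sick")
print(f"treatment: {treated_sick}/{n_per_group} sick")
# Only a difference between the two groups (not the raw count in the
# treatment group alone) would point to the sweetener as the cause.
```

Running this repeatedly shows that some rats get sick in both groups purely from the background rate, which is exactly why the raw number of sick rats in the treatment group, taken by itself, proves nothing.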
Types of control The simplest types of control are negative and positive controls, and both are found in many different types of experiments. These two controls, when both are successful, are usually sufficient to eliminate most potential confounding variables: it means that the experiment produces a negative result when a negative result is expected, and a positive result when a positive result is expected. Negative controls are groups where no phenomenon is expected. They ensure that there is no effect when there should be no effect. To continue with the example of drug testing, a negative control is a group that has not been administered the drug of interest. This group receives either no preparation at all or a sham preparation (that is, a placebo), either an excipient-only (also called vehicle-only) preparation or the proverbial "sugar pill." We would say that the control group should show a negative or null effect. In an example where there are only two possible outcomes, positive and negative, then if the treatment group and the negative control both produce a negative result, it can be inferred that the treatment had no effect. If the treatment group and the negative control both produce a positive result, it can be inferred that a confounding variable acted on the experiment, and the positive results are not due to the treatment. In other examples, outcomes might be measured as lengths, times, percentages, and so forth. For the drug testing example, we could measure the percentage of patients cured. In this case, the treatment is inferred to have no effect when the treatment group and the negative control produce the same results. Some improvement is expected in the placebo group due to the placebo effect, and this result sets the baseline which the treatment must improve upon. Even if the treatment group shows improvement, it needs to be compared to the placebo group. If the groups show the same effect, then the treatment was not responsible for the improvement (because the same number of patients were cured in the absence of the treatment). The treatment is only effective if the treatment group shows more improvement than the placebo group. Positive controls are groups where a phenomenon is expected. That is, they ensure that there is an effect when there should be an effect, by using an experimental treatment that is already known to produce that effect (and then comparing this to the treatment that is being investigated in the experiment). Positive controls are often used to assess test validity. For example, to assess a new test's ability to detect a disease (its sensitivity), then we can compare it against a different test that is already known to work. The well-established test is the positive control, since we already know that the answer to the question (whether the test works) is yes. Similarly, in an enzyme assay to measure the amount of an enzyme in a set of extracts, a positive control would be an assay containing a known quantity of the purified enzyme (while a negative control would contain no enzyme). The positive control should give a large amount of enzyme activity, while the negative control should give very low to no activity. If the positive control does not produce the expected result, there may be something wrong with the experimental procedure, and the experiment is repeated. For difficult or complicated experiments, the result from the positive control can also help in comparison to previous experimental results. 
For example, if the well-established disease test was determined to have the same effectiveness as found by previous experimenters, this indicates that the experiment is being performed in the same way that the previous experimenters did. When possible, multiple positive controls may be used — if there is more than one disease test that is known to be effective, more than one might be tested. Multiple positive controls also allow finer comparisons of the results (calibration, or standardization) if the expected results from the positive controls have different sizes. For example, in the enzyme assay discussed above, a standard curve may be produced by making many different samples with different quantities of the enzyme. In randomization, the groups that receive different experimental treatments are determined randomly. While this does not ensure that there are no differences between the groups, it ensures that the differences are distributed equally, thus correcting for systematic errors. For example, in experiments where crop yield is affected (e.g. soil fertility), the experiment can be controlled by assigning the treatments to randomly selected plots of land. This mitigates the effect of variations in soil composition on the yield. In blind experiments, at least some information is withheld from participants in the experiments (but not the experimenter). For example, to evaluate the success of a medical treatment, an outside expert might be asked to examine blood samples from each of the patients without knowing which patients received the treatment and which did not. If the expert's conclusions as to which samples represent the best outcome correlates with the patients who received the treatment, this allows the experimenter to have much higher confidence that the treatment is effective. In double-blind experiments, at least some participants and some experimenters do not possess full information while the experiment is being carried out. Double-blind experiments are most often used in clinical trials of medical treatments, to verify that the supposed effects of the treatment are produced only by the treatment itself. Trials are typically randomized and double-blinded, with two (statistically) identical groups of patients being compared. The treatment group receives the treatment, and the control group receives a placebo. The placebo is the "first" blind, and controls for the patient expectations that come with taking a pill, which can have an effect on patient outcomes. The "second" blind, of the experimenters, controls for the effects on patient expectations due to unintentional differences in the experimenters' behavior. Since the experimenters do not know which patients are in which group, they cannot unconsciously influence the patients. After the experiment is over, they then "unblind" themselves and analyse the results. In clinical trials involving a surgical procedure, a sham operated group is used to ensure that the data reflect the effects of the experiment itself, and are not a consequence of the surgery. In this case, double blinding is achieved by ensuring that the patient does not know whether their surgery was real or sham, and that the experimenters who evaluate patient outcomes are different from the surgeons and do not know which patients are in which group. - False positive - False negative - Designed experiment - Controlling for a variable - James Lind cured scurvy using a controlled experiment that has been described as the first clinical trial. 
- Wait list control group - Life, Vol. II: Evolution, Diversity and Ecology: (Chs. 1, 21-33, 52-57). W. H. Freeman. 1 December 2006. p. 15. ISBN 978-0-7167-7674-1. Retrieved 14 February 2015. - Johnson PD, Besselsen DG (2002). "Practical aspects of experimental design in animal research" (PDF). ILAR J 43 (4): 202–6. PMID 12391395. - James Lind (1753). A Treatise of the Scurvy. PDF - Simon, Harvey B. (2002). The Harvard Medical School guide to men's health. New York: Free Press. p. 31. ISBN 0-684-87181-5.
https://en.wikipedia.org/wiki/Control_experiment
4.03125
August 30, 2013
New Wildfire Insights May Change Climate Predictions
Brett Smith for redOrbit.com - Your Universe Online

In the study, which was published in Nature Communications, scientists examined two types of particles taken from the 2011 Las Conchas fire in New Mexico: soot, which is similar to diesel exhaust, and tar balls, small, round organic blobs that are abundant during a biomass fire. The team determined that tar balls made up 80 percent of the particles from the Las Conchas fire. Using a field emission scanning electron microscope, the team was able to enhance the differences among the various particles. Under the microscope, tar balls were seen as either "dark" or "bright." The two types have a differing impact on climate change since they absorb and scatter radiation from the sun differently. The team was also able to identify four categories of soot, ranging from bare to heavily coated. Each type of soot particle has different optical properties. To better understand the particles' composition and properties, the scientists heated tar balls and soot in a special chamber, essentially baking off their exterior. The scientists said that determining how these particles affect climate would mean understanding much more than how much heat they can retain. For instance, water vapor condenses more easily on oxidized particles to eventually form clouds. The researchers added that they are not yet able to determine what role these particles play with respect to climate. "We don't have an answer to that," said study author Claudio Mazzoleni, an associate professor of physics at Michigan Technological University. "The particles might be warming in and of themselves, but if they don't let solar radiation come down through the atmosphere, they could cool the surface. They may have strong effects, but at this point, it's not wise to say what." "However, our study does provide modelers new insights on the smoke particle properties, and accounting for these properties in models might provide an answer to that question," he added. "The big thing we learned is that we should not forget about tar balls in climate models," said co-author Swarup China, a graduate student at MTU, "especially since those models are predicting more and more wildfires." The study findings are being published just as California's Rim fire has been declared the fifth-largest wildfire in state history. According to reports, the fire has grown to almost 200,000 acres. Some of the acreage consumed in the blaze can be attributed to backfire operations by firefighters; the technique involves lighting low-intensity fires to rob the main fire of potential fuel. "The fire is not having erratic growth like it was before," reported Alison Hesterly, a Rim fire information officer. "And the forward spread of the fire is slowing, which is a good thing."
http://www.redorbit.com/news/science/1112936124/new-wildfire-insights-change-climate-predictions-083013/
4.15625
Dominions were semi-independent polities that were nominally under The Crown, constituting the British Empire and later the British Commonwealth, beginning in the later part of the 19th century. They included Canada, Australia, Pakistan, India, Ceylon (Sri Lanka), New Zealand, Newfoundland, South Africa, and the Irish Free State. The Balfour Declaration of 1926 recognised the Dominions as "autonomous Communities within the British Empire". In 1931 the Statute of Westminster recognised the Dominions as fully sovereign from the United Kingdom, with which they shared a common allegiance to the Crown. The Dominions and later constitutional monarchies within the Commonwealth of Nations maintained the same royal house and royal succession from before full sovereignty, and became known after the year 1953 as Commonwealth realms. Earlier usage of dominion to refer to a particular territory dates back to the 16th century and was sometimes used to describe Wales from 1535 to around 1800. In English common law, the Dominions of the British Crown were all the realms and territories under the sovereignty of that Crown. For example, the Order in Council annexing the island of Cyprus in 1914 declared that, from 5 November, the island "shall be annexed to and form part of His Majesty's Dominions". Use of dominion to refer to a particular territory dates back to the 16th century and was sometimes used to describe Wales from 1535 to around 1800: for instance, the Laws in Wales Act 1535 applies to "the Dominion, Principality and Country of Wales". Dominion, as an official title, was conferred on the Colony of Virginia about 1660 and on the Dominion of New England in 1686. These dominions never had self-governing status. The creation of the short-lived Dominion of New England was designed—contrary to the purpose of later dominions—to increase royal control and to reduce the colony's self-government. Under the British North America Act 1867, what is now eastern Canada received the status of "Dominion" upon the Confederation of several British possessions in North America. However, it was at the Colonial Conference of 1907 that the self-governing colonies of Canada and the Commonwealth of Australia were referred to collectively as Dominions for the first time. Two other self-governing colonies—New Zealand and Newfoundland—were granted the status of Dominion in the same year. These were followed by the Union of South Africa in 1910 and the Irish Free State in 1922. At the time of the founding of the League of Nations in 1920, the League Covenant made provision for the admission of any "fully self-governing state, Dominion, or Colony", the implication being that "Dominion status was something between that of a colony and a state". Dominion status was formally defined in the Balfour Declaration of 1926, which recognised these countries as "autonomous Communities within the British Empire", thus acknowledging them as political equals of the United Kingdom. The Statute of Westminster 1931 converted this status into legal reality, making them essentially independent members of what was then called the British Commonwealth. Following the Second World War, the decline of British colonialism led to Dominions generally being referred to as Commonwealth realms and the use of the word dominion gradually diminished.
Nonetheless, though disused, it remains Canada's legal title and the phrase Her Majesty's Dominions is still used occasionally in legal documents in the United Kingdom. The word dominions originally referred to the possessions of the Kingdom of England. Oliver Cromwell's full title in the 1650s was "Lord Protector of the Commonwealth of England, Scotland and Ireland, and the dominions thereto belonging". In 1660, King Charles II gave the Colony of Virginia the title of dominion in gratitude for Virginia's loyalty to the Crown during the English Civil War. The Commonwealth of Virginia, a State of the United States, still has "the Old Dominion" as one of its nicknames. Dominion also occurred in the name of the short-lived Dominion of New England (1686–1689). In all of these cases, the word dominion implied no more than being subject to the English crown. Responsible government: precursor to Dominion status The foundation of "Dominion" status followed the achievement of internal self-rule in British Colonies, in the specific form of full responsible government (as distinct from "representative government"). Colonial responsible government began to emerge during the mid-19th century. The legislatures of Colonies with responsible government were able to make laws in all matters other than foreign affairs, defence and international trade, these being powers which remained with the Parliament of the United Kingdom. Bermuda, notably, was never defined as a Dominion, despite meeting these criteria, but as a self-governing colony that remains part of the British Realm. Nova Scotia and, soon after, the Province of Canada (which included modern southern Ontario and southern Quebec) were the first Colonies to achieve responsible government, in 1848. Prince Edward Island followed in 1851, and New Brunswick and Newfoundland in 1855. All except for Newfoundland and Prince Edward Island agreed to form a new federation named Canada from 1867. This was instituted by the British Parliament in the British North America Act 1867. (See also: Canadian Confederation). Section 3 of the Act referred to the new entity as a "Dominion", the first such entity to be created. From 1870 the Dominion included two vast neighbouring British territories that did not have any form of self-government: Rupert's Land and the North-Western Territory, parts of which later became the Provinces of Manitoba, Saskatchewan, Alberta, and the separate territories, the Northwest Territories, Yukon and Nunavut. In 1871, the Crown Colony of British Columbia became a Canadian province, Prince Edward Island joined in 1873 and Newfoundland in 1949. The conditions under which the four separate Australian colonies—New South Wales, Tasmania, Western Australia, South Australia—and New Zealand could gain full responsible government were set out by the British government in the Australian Constitutions Act 1850. The Act also separated the Colony of Victoria (in 1851) from New South Wales. During 1856, responsible government was achieved by New South Wales, Victoria, South Australia, Tasmania and New Zealand. The remainder of New South Wales was divided in three in 1859, a change that established most of the present borders of NSW, the Colony of Queensland (with its own responsible self-government) and the Northern Territory (which was not granted self-government prior to federation of the Australian Colonies). Western Australia did not receive self-government until 1891, mainly because of its continuing financial dependence on the UK Government.
After protracted negotiations (that initially included New Zealand), six Australian colonies with responsible government (and their dependent territories) agreed to federate, along Canadian lines, becoming the Commonwealth of Australia, in 1901. In South Africa, the Cape Colony became the first British self-governing Colony, in 1872. (Until 1893, the Cape Colony also controlled the separate Colony of Natal.) Following the Second Boer War (1899–1902), the British Empire assumed direct control of the Boer Republics, but transferred limited self-government to Transvaal in 1906, and the Orange River Colony in 1907. The Commonwealth of Australia was recognised as a dominion in 1901, and the Dominion of New Zealand and the Dominion of Newfoundland were officially given Dominion status in 1907, followed by the Union of South Africa in 1910. Canadian Confederation and evolution of the term Dominion In connection with proposals for the future government of British North America, use of the term "Dominion" was suggested by Samuel Leonard Tilley at the London Conference of 1866 discussing the confederation of the Province of Canada (subsequently becoming the Province of Ontario and the Province of Quebec), Nova Scotia and New Brunswick into "One Dominion under the Name of Canada", the first federation internal to the British Empire. Tilley's suggestion was taken from the 72nd Psalm, verse eight, "He shall have dominion also from sea to sea, and from the river unto the ends of the earth", which is echoed in the national motto, "A Mari Usque Ad Mare". The new government of Canada under the British North America Act of 1867 began to use the phrase "Dominion of Canada" to designate the new, larger nation. However, neither the Confederation nor the adoption of the title of "Dominion" granted extra autonomy or new powers to this new federal level of government. Senator Eugene Forsey wrote that the powers acquired since the 1840s that established the system of responsible government in Canada would simply be transferred to the new Dominion government: By the time of Confederation in 1867, this system had been operating in most of what is now central and eastern Canada for almost 20 years. The Fathers of Confederation simply continued the system they knew, the system that was already working, and working well. The constitutional scholar Andrew Heard has established that Confederation did not legally change Canada's colonial status to anything approaching its later status of a Dominion. At its inception in 1867, Canada's colonial status was marked by political and legal subjugation to British Imperial supremacy in all aspects of government—legislative, judicial, and executive. The Imperial Parliament at Westminster could legislate on any matter to do with Canada and could override any local legislation, the final court of appeal for Canadian litigation lay with the Judicial Committee of the Privy Council in London, the Governor General had a substantive role as a representative of the British government, and ultimate executive power was vested in the British Monarch—who was advised only by British ministers in its exercise. Canada's independence came about as each of these sub-ordinations was eventually removed. Heard went on to document the sizeable body of legislation passed by the British Parliament in the latter part of the 19th century that upheld and expanded its Imperial supremacy to constrain that of its colonies, including the new Dominion government in Canada. 
When the Dominion of Canada was created in 1867, it was granted powers of self-government to deal with all internal matters, but Britain still retained overall legislative supremacy. This Imperial supremacy could be exercised through several statutory measures. In the first place, the British North America Act of 1867 provided in Section 55 that the Governor General may reserve any legislation passed by the two Houses of Parliament for "the signification of Her Majesty's pleasure", which is determined according to Section 57 by the British Monarch in Council. Secondly, Section 56 provides that the Governor General must forward to "one of Her Majesty's Principal Secretaries of State" in London a copy of any Federal legislation that has been assented to. Then, within two years after the receipt of this copy, the (British) Monarch in Council could disallow an Act. Thirdly, at least four pieces of Imperial legislation constrained the Canadian legislatures. The Colonial Laws Validity Act of 1865 provided that no colonial law could validly conflict with, amend, or repeal Imperial legislation that either explicitly, or by necessary implication, applied directly to that colony. The Merchant Shipping Act of 1894, as well as the Colonial Courts of Admiralty Act of 1890 required reservation of Dominion legislation on those topics for approval by the British Government. Also, the Colonial Stock Act of 1900 provided for the disallowance of any Dominion legislation the British government felt would harm British stockholders of Dominion trustee securities. Most importantly, however, the British Parliament could exercise the legal right of supremacy that it possessed over common law to pass any legislation on any matter affecting the colonies. For decades, none of the Dominions was allowed to have its own embassies or consulates in foreign countries. All matters concerning international travel, commerce, etc., had to be transacted through British embassies and consulates. For example, all transactions concerning visas and lost or stolen passports by citizens of the Dominions were carried out at British diplomatic offices. It was not until the late 1930s and early 1940s that the Dominion governments were allowed to establish their own embassies, and the first two of these that were established by the Dominion governments in Ottawa and in Canberra were both established in Washington, D.C., in the United States. As Heard later explained, the British government seldom invoked its powers over Canadian legislation. British legislative powers over Canadian domestic policy were largely theoretical and their exercise was increasingly unacceptable in the 1870s and 1880s. The rise to the status of a Dominion and then full independence for Canada and other possessions of the British Empire did not occur by the granting of titles or similar recognition by the British Parliament but by initiatives taken by the new governments of certain former British dependencies to assert their independence and to establish constitutional precedents. What is remarkable about this whole process is that it was achieved with a minimum of legislative amendments. Much of Canada's independence arose from the development of new political arrangements, many of which have been absorbed into judicial decisions interpreting the constitution—with or without explicit recognition. 
Canada's passage from being an integral part of the British Empire to being an independent member of the Commonwealth richly illustrates the way in which fundamental constitutional rules have evolved through the interaction of constitutional convention, international law, and municipal statute and case law. What was significant about the creation of the Canadian and Australian federations was not that they were instantly granted wide new powers by the Imperial centre at the time of their creation; but that they, because of their greater size and prestige, were better able to exercise their existing powers and lobby for new ones than the various colonies they incorporated could have done separately. They provided a new model which politicians in New Zealand, Newfoundland, South Africa, Ireland, India, Malaysia could point to for their own relationship with Britain. Ultimately, "[Canada's] example of a peaceful accession to independence with a Westminster system of government came to be followed by 50 countries with a combined population of more than 2-billion people." Colonial Conference of 1907 Issues of colonial self-government spilled into foreign affairs with the Boer War (1899–1902). The self-governing colonies contributed significantly to British efforts to stem the insurrection, but ensured that they set the conditions for participation in these wars. Colonial governments repeatedly acted to ensure that they determined the extent of their peoples' participation in imperial wars in the military build-up to the First World War. The assertiveness of the self-governing colonies was recognised in the Colonial Conference of 1907, which implicitly introduced the idea of the Dominion as a self-governing colony by referring to Canada and Australia as Dominions. It also retired the name "Colonial Conference" and mandated that meetings take place regularly to consult Dominions in running the foreign affairs of the empire. The Colony of New Zealand, which chose not to take part in Australian federation, became the Dominion of New Zealand on 26 September 1907; Newfoundland became a Dominion on the same day. The Union of South Africa was referred to as a Dominion upon its creation in 1910. First World War and Treaty of Versailles The initiatives and contributions of British colonies to the British war effort in the First World War were recognised by Britain with the creation of the Imperial War Cabinet in 1917, which gave them a say in the running of the war. Dominion status as self-governing states, as opposed to symbolic titles granted various British colonies, waited until 1919, when the self-governing Dominions signed the Treaty of Versailles independently of the British government and became individual members of the League of Nations. This ended the purely colonial status of the dominions. The First World War ended the purely colonial period in the history of the Dominions. Their military contribution to the Allied war effort gave them claim to equal recognition with other small states and a voice in the formation of policy. This claim was recognised within the Empire by the creation of the Imperial War Cabinet in 1917, and within the community of nations by Dominion signatures to the Treaty of Versailles and by separate Dominion representation in the League of Nations. In this way the "self-governing Dominions", as they were called, emerged as junior members of the international community. 
Their status defied exact analysis by both international and constitutional lawyers, but it was clear that they were no longer regarded simply as colonies of Britain.

Irish Free State
The Irish Free State, set up in 1922 after the Anglo-Irish War, was the first Dominion to appoint a non-British, non-aristocratic Governor-General when Timothy Michael Healy took the position in 1922. Dominion status was never popular in the Irish Free State, where people saw it as a face-saving measure for a British government unable to countenance a republic in what had previously been the United Kingdom of Great Britain and Ireland. Successive Irish governments undermined the constitutional links with Britain until they were severed completely in 1949. In 1937 Ireland adopted, almost simultaneously, both a new constitution that included powers for a president of Ireland and a law confirming the king's role of head of state in external relations.

Second Balfour Declaration and Statute of Westminster
The Balfour Declaration of 1926, and the subsequent Statute of Westminster, 1931, restricted Britain's ability to pass or affect laws outside of its own jurisdiction. Significantly, Britain initiated the change to complete sovereignty for the Dominions. The First World War left Britain saddled with enormous debts, and the Great Depression had further reduced Britain's ability to pay for defence of its empire. In spite of popular opinions of empires, the larger Dominions were reluctant to leave the protection of the then-superpower. For example, many Canadians felt that being part of the British Empire was the only thing that had prevented them from being absorbed into the United States. Until 1931, Newfoundland was referred to as a colony of the United Kingdom, as for example, in the 1927 reference to the Judicial Committee of the Privy Council to delineate the Quebec-Labrador boundary. Full autonomy was granted by the United Kingdom parliament with the Statute of Westminster in December 1931. However, the government of Newfoundland "requested the United Kingdom not to have sections 2 to 6[—]confirming Dominion status[—]apply automatically to it[,] until the Newfoundland Legislature first approved the Statute, approval which the Legislature subsequently never gave". In any event, Newfoundland's letters patent of 1934 suspended self-government and instituted a "Commission of Government", which continued until Newfoundland became a province of Canada in 1949. It is the view of some constitutional lawyers that—although Newfoundland chose not to exercise all of the functions of a Dominion like Canada—its status as a Dominion was "suspended" in 1934, rather than "revoked" or "abolished". Canada, Australia, New Zealand, the Irish Free State, Newfoundland and South Africa (prior to becoming a republic and leaving the Commonwealth in 1961), with their large populations of European descent, were sometimes collectively referred to as the "White Dominions". Today Canada, Australia, New Zealand and the United Kingdom are sometimes referred to collectively as the "White Commonwealth".

List of Dominions
| Country | From | To | Status |
| Canada | 1867 | 1953 | Continues as a Commonwealth realm and member of the Commonwealth of Nations. 'Dominion' was conferred as the country's title in the 1867 constitution and retained with the constitution's patriation in 1982, but has fallen into disuse. |
| Australia | 1901 | 1953 | Continues as a Commonwealth realm and member of the Commonwealth of Nations. |
| New Zealand | 1907 | 1953 | Continues as a Commonwealth realm and member of the Commonwealth of Nations. |
| Newfoundland | 1907 | 1949 | After governance had reverted to direct control from London in 1934, became a province of Canada under the British North America Act, 1949 (now the Newfoundland Act) passed in the U.K. parliament, 31 March 1949, prior to the London Declaration of 28 April 1949. |
| South Africa | 1910 | 1953 | Continued as a Commonwealth realm until it became a republic in 1961 under the Republic of South Africa Constitution Act, 1961, passed by the Parliament of South Africa, long title "To constitute the Republic of South Africa and to provide for matters incidental thereto", assented to 24 April 1961 to come into operation on 31 May 1961. |
| Irish Free State (1922–37), Éire (1937–49) | 1922 | 1949 | The link with the monarchy ceased with the passage of the Republic of Ireland Act 1948, which came into force on 18 April 1949 and declared that the state was a republic. |
| India | 1947 | 1950 | The Union of India (with the addition of Sikkim) became a federal republic after its constitution came into effect on 26 January 1950. |
| Pakistan | 1947 | 1953 | Continued as a Commonwealth realm until 1956, when it became a republic under the name "The Islamic Republic of Pakistan": Constitution of 1956. |
| Ceylon | 1948 | 1953 | Continued as a Commonwealth realm until 1972, when it became a republic under the name of Sri Lanka. |

Four colonies of Australia had enjoyed responsible government since 1856: New South Wales, Victoria, Tasmania and South Australia. Queensland had responsible government soon after its founding in 1859. Because of ongoing financial dependence on Britain, Western Australia became the last Australian colony to attain self-government, in 1890. During the 1890s, the colonies voted to unite and in 1901 they were federated under the British Crown as the Commonwealth of Australia by the Commonwealth of Australia Constitution Act. The Constitution of Australia had been drafted in Australia and approved by popular consent. Thus Australia is one of the few countries established by a popular vote. Under the second Balfour Declaration, the federal government was regarded as coequal with (and not subordinate to) the British and other Dominion governments, and this was given formal legal recognition in 1942 (when the Statute of Westminster was retroactively adopted to the commencement of the Second World War in 1939). In 1930, the Australian prime minister, James Scullin, reinforced the right of the overseas Dominions to appoint native-born governors-general, when he advised King George V to appoint Sir Isaac Isaacs as his representative in Australia, against the wishes of the opposition and officials in London. The governments of the States (called colonies before 1901) remained under the Commonwealth but retained links to the UK until the passage of the Australia Act 1986.
Furthermore, Sections 3 and 4 indicate that the provinces "shall form and be One Dominion under the Name of Canada; and on and after that Day those Three Provinces shall form and be One Dominion under that Name accordingly". Usage of the phrase Dominion of Canada was employed as the country's name after 1867, predating the general use of the term dominion as applied to the other autonomous regions of the British Empire after 1907. The phrase Dominion of Canada does not appear in the 1867 act nor in the Constitution Act, 1982, but does appear in the Constitution Act, 1871, other contemporaneous texts, and subsequent bills. References to the Dominion of Canada in later acts, such as the Statute of Westminster, do not clarify the point because all nouns were formally capitalised in British legislative style. Indeed, in the original text of the Constitution Act, 1867, "One" and "Name" were also capitalised. Frank Scott theorised that Canada's status as a Dominion ended with the Canadian parliament's declaration of war on Germany on 9 September 1939. From the 1950s, the federal government began to phase out the use of Dominion, which had been used largely as a synonym of "federal" or "national" such as "Dominion building" for a post office, "Dominion-provincial relations", and so on. The last major change was renaming the national holiday from Dominion Day to Canada Day in 1982. Official bilingualism laws also contributed to the disuse of dominion, as it has no acceptable equivalent in French. While the term may be found in older official documents, and the Dominion Carillonneur still tolls at Parliament Hill, it is now hardly used to distinguish the federal government from the provinces or (historically) Canada before and after 1867. Nonetheless, the federal government continues to produce publications and educational materials that specify the currency of these official titles. Defenders of the title Dominion—including monarchists who see signs of creeping republicanism in Canada—take comfort in the fact that the Constitution Act, 1982 does not mention and therefore does not remove the title, and that a constitutional amendment is required to change it. 
The word Dominion has been used with other agencies, laws, and roles: - Dominion Carillonneur: official responsible for playing the carillons at the Peace Tower since 1916 - Dominion Day (1867–1982): holiday marking Canada's national day; now called Canada Day - Dominion Observatory (1905–1970): weather observatory in Ottawa; now used as Office of Energy Efficiency, Energy Branch, Natural Resources Canada - Dominion Lands Act (1872): federal lands act; repealed in 1918 - Dominion Bureau of Statistics (1918–1971): superseded by Statistics Canada - Dominion Police (1867–1920): merged to form the Royal Canadian Mounted Police (RCMP) - Dominion Astrophysical Observatory (1918–present); now part of the National Research Council Herzberg Institute of Astrophysics - Dominion Radio Astrophysical Observatory (1960–present); now part of the National Research Council Herzberg Institute of Astrophysics - Dominion of Canada Rifle Association founded in 1868 and incorporated by an Act of Parliament in 1890 Toronto-Dominion Bank (founded as the Dominion Bank in 1871 and later merged with the Bank of Toronto), the Dominion of Canada General Insurance Company (founded in 1887), the Dominion Institute (created in 1997), and Dominion (founded in 1927, renamed as Metro stores beginning in August 2008) are notable Canadian corporations not affiliated with government that have used Dominion as a part of their corporate name. Ceylon, which, as a crown colony, was originally promised "fully responsible status within the British Commonwealth of Nations", was formally granted independence as a Dominion in 1948. In 1972 it adopted a republican constitution to become the Free, Sovereign and Independent Republic of Sri Lanka. By a new constitution in 1978, it became the Democratic Socialist Republic of Sri Lanka. India and Pakistan India officially acquired responsible government in 1909, though the first Parliament did not meet until 1919. In the 1930s the idea of making British India as it was then into a dominion - the first one with a non-European population - was seriously discussed, but ran into serious snags - notably the increasing tensions between Hindus and Muslims. India and Pakistan finally separated as independent dominions in 1947. In the changed post-Second World War conditions this proved a transitory stage, and India became a republic in 1950 and Pakistan adopted a republican form of government in 1956. Irish Free State / Ireland The Irish Free State (Ireland from 1937) was a British Dominion between 1922 and 1949. As established by the Irish Free State Constitution Act of the United Kingdom Parliament on 6 December 1922 the new state—which had dominion status in the likeness of that enjoyed by Canada within the British Commonwealth of Nations—comprised the whole of Ireland. However, provision was made in the Act for the Parliament of Northern Ireland to opt out of inclusion in the Irish Free State, which—as had been widely expected at the time—it duly did one day after the creation of the new state, on 7 December 1922. Following a plebiscite of the people of the Free State held on 1 July 1937, a new constitution came into force on 29 December of that year, establishing a successor state with the name of "Ireland" which ceased to participate in Commonwealth conferences and events. Nevertheless, the United Kingdom and other member states of the Commonwealth continued to regard Ireland as a dominion owing to the unusual role accorded to the British Monarch under the Irish External Relations Act of 1936. 
Ultimately, however, Ireland's Oireachtas passed the Republic of Ireland Act 1948, which came into force on 18 April 1949 and unequivocally ended Ireland's links with the British Monarch and the Commonwealth. The colony of Newfoundland enjoyed responsible government from 1855 to 1934. It was among the colonies declared dominions in 1907. Following the recommendations of a Royal Commission, parliamentary government was suspended in 1934 due to severe financial difficulties resulting from the depression and a series of riots against the dominion government in 1932. In 1949, it joined Canada and the legislature was restored. The New Zealand Constitution Act 1852 gave New Zealand its own Parliament (General Assembly) and home rule in 1852. In 1907 New Zealand was proclaimed the Dominion of New Zealand. New Zealand, Canada, and Newfoundland used the word dominion in the official title of the nation, whereas Australia used Commonwealth of Australia and South Africa Union of South Africa. New Zealand adopted the Statute of Westminster in 1947 and in the same year legislation passed in London gave New Zealand full powers to amend its own constitution. In 1986, the New Zealand parliament passed the Constitution Act 1986, which repealed the Constitution Act of 1852 and the last constitutional links with the United Kingdom. The Union of South Africa was formed in 1910 from the four self-governing colonies of the Cape Colony, Natal, the Transvaal, and the Orange Free State (the last two were former Boer republics). The South Africa Act 1909 provided for a Parliament consisting of a Senate and a House of Assembly. The provinces had their own legislatures. In 1961, the Union of South Africa adopted a new constitution, became a republic, left the Commonwealth (and re-joined following end of Apartheid rule in the 1990s), and became the present-day Republic of South Africa. Southern Rhodesia (renamed Zimbabwe in 1980) was a special case in the British Empire. Although it was never a dominion, it was treated as a dominion in many respects. Southern Rhodesia was formed in 1923 out of territories of the British South Africa Company and established as a self-governing colony with substantial autonomy on the model of the dominions. The imperial authorities in London retained direct powers over foreign affairs, constitutional alterations, native administration and bills regarding mining revenues, railways and the governor's salary. Southern Rhodesia was not one of the territories that were mentioned in the 1931 Statute of Westminster although relations with Southern Rhodesia were administered in London through the Dominion Office, not the Colonial Office. When the dominions were first treated as foreign countries by London for the purposes of diplomatic immunity in 1952, Southern Rhodesia was included in the list of territories concerned. This semi-dominion status continued in Southern Rhodesia between 1953 and 1963, when it joined Northern Rhodesia and Nyasaland in the Central African Federation, with the latter two territories continuing to be British protectorates. When Northern Rhodesia was given independence in 1964 it adopted the new name of Zambia, prompting Southern Rhodesia to shorten its name to Rhodesia, but Britain did not recognise this latter change. Rhodesia unilaterally declared independence from Britain in 1965 as a result of the British government's insistence on majority rule as a condition for independence. 
London regarded this declaration as illegal, and applied sanctions and expelled Rhodesia from the sterling area. Rhodesia continued with its dominion-style constitution until 1970, and continued to issue British passports to its citizens. The Rhodesian government continued to profess its loyalty to the Sovereign, despite being in a state of rebellion against Her Majesty's Government in London, until 1970, when it adopted a republican constitution following a referendum the previous year. This endured until the state's reconstitution as Zimbabwe Rhodesia in 1979 under the terms of the Internal Settlement; this lasted until the Lancaster House Agreement of December 1979, which put it under interim British rule while fresh elections were held. The country achieved independence deemed legal by the international community in April 1980, when Britain granted independence under the name Zimbabwe. |This section does not cite any sources. (February 2012)| Initially, the Foreign Office of the United Kingdom conducted the foreign relations of the Dominions. A Dominions section was created within the Colonial Office for this purpose in 1907. Canada set up its own Department of External Affairs in June 1909, but diplomatic relations with other governments continued to operate through the governors-general, Dominion High Commissioners in London (first appointed by Canada in 1880; Australia followed only in 1910), and British legations abroad. Britain deemed her declaration of war against Germany in August 1914 to extend to all territories of the Empire without the need for consultation, occasioning some displeasure in Canadian official circles and contributing to a brief anti-British insurrection by Afrikaner militants in South Africa later that year. A Canadian War Mission in Washington, D.C., dealt with supply matters from February 1918 to March 1921. Although the Dominions had had no formal voice in declaring war, each became a separate signatory of the June 1919 peace Treaty of Versailles, which had been negotiated by a British-led united Empire delegation. In September 1922, Dominion reluctance to support British military action against Turkey influenced Britain's decision to seek a compromise settlement. Diplomatic autonomy soon followed, with the U.S.-Canadian Halibut Treaty (March 1923) marking the first time an international agreement had been entirely negotiated and concluded independently by a Dominion. The Dominions Section of the Colonial Office was upgraded in June 1926 to a separate Dominions Office; however, initially, this office was held by the same person that held the office of Secretary of State for the Colonies. The principle of Dominion equality with Britain and independence in foreign relations was formally recognised by the Balfour Declaration, adopted at the Imperial Conference of November 1926. Canada's first permanent diplomatic mission to a foreign country opened in Washington, D.C., in 1927. In 1928, Canada obtained the appointment of a British high commissioner in Ottawa, separating the administrative and diplomatic functions of the governor-general and ending the latter's anomalous role as the representative of the British government in relations between the two countries. The Dominions Office was given a separate secretary of state in June 1930, though this was entirely for domestic political reasons given the need to relieve the burden on one ill minister whilst moving another away from unemployment policy. 
The Balfour Declaration was enshrined in the Statute of Westminster 1931 when it was adopted by the British Parliament and subsequently ratified by the Dominion legislatures. Britain's declaration of hostilities against Nazi Germany on 3 September 1939 tested the issue. Most took the view that the declaration did not commit the Dominions. Ireland chose to remain neutral. At the other extreme, the conservative Australian government of the day, led by Robert Menzies, took the view that, since Australia had not adopted the Statute of Westminster, it was legally bound by the UK declaration of war—which had also been the view at the outbreak of the First World War—though this was contentious within Australia. Between these two extremes, New Zealand declared that as Britain was or would be at war, so it was too. This was, however, a matter of political choice rather than legal necessity. Canada issued its own declaration of war after a recall of Parliament, as did South Africa after a delay of several days (South Africa on September 6, Canada on September 10). Ireland, which had negotiated the removal of British forces from its territory the year before, remained neutral. There were soon signs of growing independence from the other Dominions: Australia opened a diplomatic mission in the US in 1940, as did New Zealand in 1941, and Canada's mission in Washington gained embassy status in 1943. From Dominions to Commonwealth realms |This section does not cite any sources. (January 2011)| Initially, the Dominions conducted their own trade policy, some limited foreign relations and had autonomous armed forces, although the British government claimed and exercised the exclusive power to declare wars. However, after the passage of the Statute of Westminster the language of dependency on the Crown of the United Kingdom ceased, where the Crown itself was no longer referred to as the Crown of any place in particular but simply as "the Crown". Arthur Berriedale Keith, in Speeches and Documents on the British Dominions 1918–1931, stated that "the Dominions are sovereign international States in the sense that the King in respect of each of His Dominions (Newfoundland excepted) is such a State in the eyes of international law". After then, those countries that were previously referred to as "Dominions" became Commonwealth realms where the sovereign reigns no longer as the British monarch, but as monarch of each nation in its own right, and are considered equal to the UK and one another. The Second World War, which fatally undermined Britain's already weakened commercial and financial leadership, further loosened the political ties between Britain and the Dominions. Australian Prime Minister John Curtin's unprecedented action (February 1942) in successfully countermanding an order from British Prime Minister Winston Churchill that Australian troops be diverted to defend British-held Burma (the 7th Division was then en route from the Middle East to Australia to defend against an expected Japanese invasion) demonstrated that Dominion governments might no longer subordinate their own national interests to British strategic perspectives. To ensure that Australia had full legal power to act independently, particularly in relation to foreign affairs, defence industry and military operations, and to validate its past independent action in these areas, Australia formally adopted the Statute of Westminster in October 1942 and backdated the adoption to the start of the war in September 1939. 
The Dominions Office merged with the India Office as the Commonwealth Relations Office upon the independence of India and Pakistan in August 1947. The last country officially made a Dominion was Ceylon in 1948. The term "Dominion" fell out of general use thereafter. Ireland ceased to be a member of the Commonwealth on 18 April 1949, upon the coming into force of the Republic of Ireland Act 1948. This formally signalled the end of the former dependencies' common constitutional connection to the British crown. India also adopted a republican constitution in January 1950. Unlike many dependencies that became republics, Ireland never re-joined the Commonwealth, which agreed to accept the British monarch as head of that association of independent states. The independence of the separate realms was emphasised after the accession of Queen Elizabeth II in 1952, when she was proclaimed not just as Queen of the United Kingdom, but also Queen of Canada, Queen of Australia, Queen of New Zealand, and of all her other "realms and territories" etc. This also reflected the change from Dominion to realm; in the proclamation of Queen Elizabeth II's new titles in 1953, the phrase "of her other Realms and Territories" replaced "Dominion" with another mediaeval French word with the same connotation, "realm" (from royaume). Thus, recently, when referring to one of those sixteen countries within the Commonwealth of Nations that share the same monarch, the phrase Commonwealth realm has come into common usage instead of Dominion to differentiate the Commonwealth nations that continue to share the monarch as head of state (Australia, Canada, New Zealand, Jamaica, etc.) from those that do not (India, Pakistan, South Africa, etc.). The term "Dominion" is still found in the Canadian constitution where it appears numerous times, but it is largely a vestige of the past, as the Canadian government does not actively use it (see Canada section). The term "realm" does not appear in the Canadian constitution. The generic language of dominion did not cease in relation to the Sovereign. It was, and is, used to describe territories in which the monarch exercises sovereignty. It also describes a model of governance in newly independent British colonies, featuring a Westminster parliamentary government and the British monarch as head of state: After World War II, Britain attempted to repeat the dominion model in decolonizing the Caribbean. ... Though several colonies, such as Guyana and Trinidad and Tobago, maintained their formal allegiance to the British monarch, they soon revised their status to become republics. Britain also attempted to establish a dominion model in decolonizing Africa, but it, too, was unsuccessful. ... Ghana, the first former colony declared a dominion in 1957, soon demanded recognition as a republic. Other African nations followed a similar pattern throughout the 1960s: Nigeria, Tanganyika, Uganda, Kenya, and Malawi. In fact, only Gambia, Sierra Leone, and Mauritius retained their dominion status for more than three years. The phrase His/Her Majesty's dominions is a legal and constitutional phrase that refers to all the realms and territories of the Sovereign, whether independent or not. Thus, for example, the British Ireland Act, 1949, recognised that the Republic of Ireland had "ceased to be part of His Majesty’s dominions". 
When dependent territories that had never been annexed (that is, were not colonies of the Crown), but were mandates, protectorates or trust territories (of the United Nations or the former League of Nations) were granted independence, the United Kingdom act granting independence always declared that such and such a territory "shall form part of Her Majesty's dominions", and so become part of the territory in which the Queen exercises sovereignty, not merely suzerainty. Many distinctive characteristics that once pertained only to Dominions are now shared by other states in the Commonwealth, whether republics, independent realms, associated states or territories. The practice of appointing a High Commissioner instead of a diplomatic representative such as an ambassador for communication between the government of a dominion and the British government in London continues in respect of Commonwealth realms and republics as sovereign states. - British Empire - Changes in British sovereignty - Commonwealth of Nations - Crown colony - High Commissioner (Commonwealth) - Name of Canada - Self-governing colony - United Kingdom - Merriam Webster's Dictionary (based on Collegiate vol., 11th ed.) 2006. Springfield, MA: Merriam-Webster, Inc. - Hillmer, Norman (2001). "Commonwealth". Toronto: Canadian Encyclopedia. ...the Dominions (a term applied to Canada in 1867 and used from 1907 to 1948 to describe the empire's other self-governing members) - Cyprus (Annexation) Order in Council, 1914, dated 5 November 1914. - Order quoted in The American Journal of International Law, "Annexation of Cyprus by Great Britain" - "Parliamentary questions, Hansard, 5 November 1934". hansard.millbanksystems.com. 1934-11-05. Retrieved 2010-06-11. - Roberts, J. M., The Penguin History of the World (London: Penguin Books, 1995, ISBN 0-14-015495-7), p. 777 - League of Nations (1924). "The Covenant of the League of Nations". Article 1: The Avalon Project at Yale Law School. Retrieved 2009-04-20. - James Crawford, The Creation of States in International Law (Oxford: Oxford University Press, 1979, ISBN 978-0-19-922842-3), p. 243 - "Dominion". Youth Encyclopedia of Canada (based on Canadian Encyclopedia). Historica Foundation of Canada, 2008. Accessed 2008-06-20. "The word "Dominion" is the official status of Canada. ... The term is little used today." - National Health Service Act 2006 (c. 41), sch. 22 - Link to the Australian Constitutions Act 1850 on the website of the National Archives of Australia: www.foundingdocs.gov.au - Link to the New South Wales Constitution Act 1855, on the Web site of the National Archives of Australia: www.foundingdocs.gov.au - Link to the Victoria Constitution Act 1855, on the Web site of the National Archives of Australia: www.foundingdocs.gov.au - Link to the Constitution Act 1855 (SA), on the Web site of the National Archives of Australia: www.foundingdocs.gov.au - Link to the Constitution Act 185 (Tasmania), on the Web site of the National Archives of Australia: www.foundingdocs.gov.au - Link to the Order in Council of 6 June 1859, which established the Colony of Queensland, on the Web site of the National Archives of Australia. - The "Northern Territory of New South Wales" was physically separated from the main part of NSW. In 1863, the bulk of it was transferred to South Australia, except for a small area that became part of Queensland. See: Letters Patent annexing the Northern Territory to South Australia, 1863. 
In 1911, the Commonwealth of Australia agreed to assume responsibility for administration of the Northern Territory, which was regarded by the government of South Australia as a financial burden.www.foundingdocs.gov.au. The NT did not receive responsible government until 1978. - Link to the Constitution Act 1890, which established self-government in Western Australia: www.foundingdocs.gov.au - Alan Rayburn (2001). Naming Canada: Stories about Canadian Place Names. University of Toronto Press. pp. 17–21. ISBN 978-0-8020-8293-0. - "The London Conference December 1866 – March 1867". www.collectionscanada.gc.ca. Retrieved 2010-06-11. - Andrew Heard (2008-02-05). "Canadian Independence". Check date values in: |year= / |date= mismatch(help) - Eugene Forsey (2007-10-14). "How Canadians Govern Themselves". Check date values in: |year= / |date= mismatch(help) - Buckley, F. H., The Once and Future King: The Rise of Crown Government in America (Encounter Books, 2014), excerpt: http://fullcomment.nationalpost.com/2014/05/15/f-h-buckley-how-canadas-creation-changed-the-world/. - F. R. Scott (January 1944). "The End of Dominion Status". The American Journal of International Law (American Society of International Law) 38 (1): 34–49. doi:10.2307/2192530. JSTOR 2192530. - Europe Since 1914: Encyclopedia of the Age of War and Reconstruction; John Merriman and Jay Winter; 2006; see the British Empire entry which lists the "White Dominions" above except Newfoundland - J. E. Hodgetts. 2004. "Dominion". Oxford Companion to Canadian History, Gerald Hallowell, ed. (ISBN 0-19-541559-0), at http://www.oxfordreference.com/view/10.1093/acref/9780195415599.001.0001/acref-9780195415599-e-471 - p. 183: "... Ironically, defenders of the title dominion who see signs of creeping republicanism in such changes can take comfort in the knowledge that the Constitution Act, 1982, retains the title and requires a constitutional amendment to alter it." - Forsey, Eugene A., in Marsh, James H., ed. 1988. "Dominion" The Canadian Encyclopedia. Hurtig Publishers: Toronto. - "National Flag of Canada Day: How Did You Do?". Department of Canadian Heritage. Retrieved 2008-02-07. The issue of our country's legal title was one of the few points on which our constitution is not entirely homemade. The Fathers of Confederation wanted to call the country "the Kingdom of Canada". However the British government was afraid of offending the Americans so it insisted on the Fathers finding another title. The term "Dominion" was drawn from Psalm 72. In the realms of political terminology, the term dominion can be directly attributed to the Fathers of Confederation and it is one of the very few, distinctively Canadian contributions in this area. It remains our country's official title. - s:Republic of South Africa Constitution Act, 1961 - B. Hunter (ed), The Stateman's Year Book 1996-1997, Macmillan Press Ltd, pp. 130-156 - Order in Council of the UK Privy Council, 6 June 1859, establishing responsible government in Queensland. See Australian Government's "Documenting a Democracy" website at this webpage: www.foundingdocs.gov.au - Constitution Act 1890 (UK), which came into effect as the Constitution of Western Australia when proclaimed in WA on 21 October 1890, and establishing responsible government in WA from that date; Australian Government's "Documenting a Democracy" website: www.foundingdocs.gov.au - D. Smith, Head of State, MaCleay Press 2005, p. 18 - Scott, Frank R. (January 1944). "The End of Dominion Status". 
The American Journal of International Law (American Society of International Law) 38 (1): 34–49. doi:10.2307/2192530. - "The Prince of Wales 2001 Royal Visit: April 25 - April 30; Test Your Royal Skills". Department of Canadian Heritage. 2001. Retrieved 2008-02-07. As dictated by the British North America Act, 1867, the title is Dominion of Canada. The term is a uniquely Canadian one, implying independence and not colonial status, and was developed as a tribute to the Monarchical principle at the time of Confederation. - "How Canadians Govern Themselves" (PDF). PDF. Retrieved 2008-02-06. Forsey, Eugene (2005). How Canadians Govern Themselves (6th ed.). Ottawa: Her Majesty the Queen in Right of Canada. ISBN 0-662-39689-8. The two small points on which our constitution is not entirely homemade are, first, the legal title of our country, “Dominion,” and, second, the provisions for breaking a deadlock between the Senate and the House of Commons. - The Statesman's Year Book, p. 635 - Indian Independence Act 1947, "An Act to make provision for the setting up in India of two independent Dominions, to substitute other provisions for certain provisions of the Government of India Act 1935, which apply outside those Dominions, and to provide for other matters consequential on or connected with the setting up of those Dominions" passed by the U.K. parliament 18 July 1947. - The Statesman's Year Book, p. 1002 - On 7 December 1922 (the day after the establishment of the Irish Free State) the Parliament resolved to make the following address to the King so as to opt out of the Irish Free State: ”MOST GRACIOUS SOVEREIGN, We, your Majesty's most dutiful and loyal subjects, the Senators and Commons of Northern Ireland in Parliament assembled, having learnt of the passing of the Irish Free State Constitution Act, 1922, being the Act of Parliament for the ratification of the Articles of Agreement for a Treaty between Great Britain and Ireland, do, by this humble Address, pray your Majesty that the powers of the Parliament and Government of the Irish Free State shall no longer extend to Northern Ireland". Source: Northern Ireland Parliamentary Report, 7 December 1922 and Anglo-Irish Treaty, sections 11, 12. - The Statesman's Year Book, p. 302 - The Statesman's Year Book, p. 303 - The Statesman's Year Book - "History, Constitutional - The Legislative Authority of the New Zealand Parliament - 1966 Encyclopaedia of New Zealand". www.teara.govt.nz. 2009-04-22. Retrieved 2010-06-11. - "Dominion status". NZHistory. Retrieved 2010-06-11. - Prof. Dr. Axel Tschentscher, LL. M. "ICL - New Zealand - Constitution Act 1986". servat.unibe.ch. Retrieved 2010-06-11. - The Stateman’s Year Book p. 1156 - Wikisource: South Africa Act 1909 - Rowland, J. Reid. "Constitutional History of Rhodesia: An outline": 245–251. Appendix to Berlyn, Phillippa (April 1978). The Quiet Man: A Biography of the Hon. Ian Douglas Smith. Salisbury: M. O. Collins. pp. 240–256. OCLC 4282978. - Wood, J. R. T. (April 2008). A matter of weeks rather than months: The Impasse between Harold Wilson and Ian Smith: Sanctions, Aborted Settlements and War 1965–1969. Victoria, British Columbia: Trafford Publishing. p. 5. ISBN 978-1-42514-807-2. - Harris, P. B. (September 1969). "The Rhodesian Referendum: June 20th, 1969" (pdf). Parliamentary Affairs (Oxford University Press) 23: 72–80. Retrieved 2013-06-04. - Gowlland-Debbas, Vera (1990). 
Collective Responses to Illegal Acts in International Law: United Nations action in the question of Southern Rhodesia (First ed.). Leiden and New York: Martinus Nijhoff Publishers. p. 73. ISBN 0-7923-0811-5. - Statute of Westminster Adoption Act 1942 (Act no. 56 of 1942). The long title for the Act was "To remove Doubts as to the Validity of certain Commonwealth Legislation, to obviate Delays occurring in its Passage, and to effect certain related purposes, by adopting certain Sections of the Statute of Westminster, 1931, as from the Commencement of the War between His Majesty the King and Germany." Link: www.foundingdocs.gov.au. - Brandon Jernigan, "British Empire" in M. Juang & Noelle Morrissette, eds., Africa and the Americas: Culture, Politics, and History (ABC-CLIO, 2008) p. 204. - Buckley, F. H., The Once and Future King: The Rise of Crown Government in America, Encounter Books, 2014. - Choudry, Sujit. 2001 (?). "Constitution Acts" (based on looseleaf by Hogg, Peter W.). Constitutional Keywords. University of Alberta, Centre for Constitutional Studies: Edmonton. - Holland, R. F., Britain and the Commonwealth Alliance 1918-1939, MacMillan, 1981. - Forsey, Eugene A. 2005. How Canadians Govern Themselves, 6th ed. (ISBN 0-662-39689-8) Canada: Ottawa. - Hallowell, Gerald, ed. 2004. The Oxford Companion to Canadian History. Oxford University Press: Toronto; p. 183-4 (ISBN 0-19-541559-0). - Marsh, James H., ed. 1988. "Dominion" et al. The Canadian Encyclopedia. Hurtig Publishers: Toronto. - Martin, Robert. 1993 (?). 1993 Eugene Forsey Memorial Lecture: A Lament for British North America. The Machray Review. Prayer Book Society of Canada. A summative piece about nomenclature and pertinent history with abundant references. - Rayburn, Alan. 2001. Naming Canada: stories about Canadian place names, 2nd ed. (ISBN 0-8020-8293-9) University of Toronto Press: Toronto.
https://en.wikipedia.org/wiki/British_Dominions
4
corn laws, regulations restricting the export and import of grain, particularly in England. As early as 1361 export was forbidden in order to keep English grain cheap. Subsequent laws, numerous and complex, forbade export unless the domestic price was low and forbade import unless it was high. The purpose of the laws was to assure a stable and sufficient supply of grain from domestic sources, eliminating undue dependence on foreign supplies, yet allowing for imports in time of scarcity. The corn law of 1815 was designed to maintain high prices and prevent an agricultural depression after the Napoleonic Wars. Consumers and laborers objected, but it was the criticism of manufacturers that the laws hampered industrialization by subsidizing agriculture that proved most effective. Following a campaign by the Anti-Corn-Law League, the corn laws were repealed by the Conservative government of Sir Robert Peel in 1846, despite the opposition of many of his own party, led by Lord George Bentinck and Benjamin Disraeli. With the revival of protectionism in the 20th cent., new grain restriction laws have been passed, but they have not been as extensive as those of earlier times. See D. G. Barnes, A History of English Corn Laws from 1660 to 1846 (1930, repr. 1965); N. Longmate, The Breadstealers (1984). The Columbia Electronic Encyclopedia, 6th ed. Copyright © 2012, Columbia University Press. All rights reserved.
http://www.factmonster.com/encyclopedia/history/corn-laws.html
4.09375
Pollution affects species directly, leading to mortality (in 6% of globally threatened birds) or reduced reproductive success (in 3%), as well as indirectly, through the degradation of habitats (affecting 11%). Pollution associated with agriculture, forestry and industry is the most common threat, and has the greatest impact on marine and freshwater environments and the species that depend upon them. Pollutants from a range of sources are causing habitat degradation that indirectly affects 11% of all threatened birds; and pollution has direct impacts on 6% of threatened birds through mortality and a further 3% that experience reduced reproductive success (analysis of data held in BirdLife’s World Bird Database 2008). The number of species affected by pollution is low compared with other threats and, significantly, the problems of pollution are relatively easy to solve. The major pollutants are effluents: from agriculture, forestry, industry, oil spills and the over-application of herbicides and pesticides (see figure). Effluents cause the greatest damage to aquatic environments; both marine and freshwater. A total of 170 threatened species are affected by one or more pollutants. Of those, 97 (57%) are associated with marine or freshwater habitats (compared with 25% of all threatened birds). Other specific forms of pollution affect a smaller number of species; among them garbage, acid rain and pollution from artificial lights which impacts burrow nesting seabirds such as Newell’s Shearwater Puffinus newelli that return to their colonies after dark and become disorientated by artificial lights. BirdLife International (2008) Pollution from agriculture, forestry and industry has significant impacts on birds. Presented as part of the BirdLife State of the world's birds website. Available from: http://www.birdlife.org/datazone/sowb/casestudy/155. Checked: 14/02/2016 |Key message: Pollution has direct and indirect impacts on bird populations|
http://www.birdlife.org/datazone/sowb/casestudy/155
4.03125
Conjunctivitis, also known as "pink eye," is an inflammation of the conjunctiva of the eye. The conjunctiva is the membrane that lines the inside of the eye and also a thin membrane that covers the actual eye. There are many different causes of conjunctivitis. The following are the most common causes: - bacteria, including: - Staphylococcus aureus - Haemophilus influenza - Streptococcus pneumoniae - Neisseria gonorrhea - Chlamydia trachomatis - viruses, including: - herpes virus - chemicals (seen mostly in the newborn period after the use of medicine in the eye to prevent other problems) Conjunctivitis is usually divided into at least two categories, newborn conjunctivitis and childhood conjunctivitis, with different causes and treatments for each. - newborn conjunctivitis The following are the most common causes and treatment options of newborn conjunctivitis: - chemical conjunctivitis This is related to an irritation in the eye from the use of eye drops that are given to the newborn to help prevent a bacterial infection. Sometimes, the newborn reacts to the drops and may develop a chemical conjunctivitis. The eyes are usually mildly red and inflamed, starting a few hours after the drops have been placed in the eye, and lasts for only 24 to 36 hours. This type of conjunctivitis usually requires no treatment. - gonococcal conjunctivitis This is caused by a bacteria called Neisseria gonorrhea. The newborn obtains this type of conjunctivitis by the passage through the birth canal from an infected mother. This type of conjunctivitis may be prevented with the use of eye drops in newborns at birth. The newborn eyes usually are very red, with thick drainage and swelling of the eyelids. This type usually starts about 2 to 4 days after birth. Treatment for gonococcal conjunctivitis usually will include antibiotics through an intravenous (IV) catheter. - inclusion conjunctivitis This is caused by an infection with chlamydia trachomatis, obtained by passage through the birth canal from an infected mother. The symptoms include moderate thick drainage from the eyes, redness of the eyes, swelling of the conjunctiva, and some swelling of the eyelids. This type of conjunctivitis usually starts 5 to 12 days after birth. Treatment usually will include oral antibiotics. - other bacterial causes After the first week of life, other bacteria may be the cause of conjunctivitis in the newborn. The eyes may be red and swollen with some drainage. Treatment depends on the type of bacteria that has caused the infection. Treatment usually will include antibiotic drops or ointments to the eye, warm compresses to the eye, and proper hygiene when touching the infected eyes. - childhood conjunctivitis Childhood conjunctivitis is a swelling of the conjunctiva and may also include an infection. It is a very common problem in children. Also, large outbreaks of conjunctivitis are often seen in daycare settings or schools. The following are the most common causes of childhood conjunctivitis: The following are the most common symptoms of childhood conjunctivitis. However, each child may experience symptoms differently. 
Symptoms may include:
- itchy, irritated eyes
- clear, thin drainage (usually seen with viral or allergic causes)
- sneezing and runny nose (usually seen with allergic causes)
- stringy discharge from the eyes (usually seen with allergic causes)
- thick, green drainage (usually seen with bacterial causes)
- ear infection (usually seen with bacterial causes)
- lesion with a crusty appearance (usually seen with herpes infection)
- eyes that are matted together in the morning
- swelling of the eyelids
- redness of the conjunctiva
- discomfort when the child looks at a light
- burning in the eyes
The symptoms of conjunctivitis may resemble other medical conditions or problems. Always consult your child's physician for a diagnosis. Conjunctivitis is usually diagnosed based on a complete medical history and physical examination of your child's eye. Cultures of the eye drainage are usually not required, but may be done to help confirm the cause of the infection. Specific treatment for conjunctivitis will be determined by your physician based on:
- your child's age, overall health, and medical history
- extent of the condition
- your child's tolerance for specific medications, procedures, or therapies
- expectations for the course of the condition
- your opinion or preference
Specific treatment depends on the underlying cause of the conjunctivitis.
- bacterial causes Your child's physician may order antibiotic drops to put in the eyes.
- viral causes Viral conjunctivitis usually does not require treatment. Your child's physician may order antibiotic drops for the eyes to help decrease the chance of a secondary infection.
- allergic causes Treatment for conjunctivitis caused by allergies usually will involve treating the allergies. Your child's physician may order oral medications or eye drops to help with the allergies.
If your child has an eye infection caused by a herpes virus, your child's physician may refer you to an eye care specialist. Your child may be given both oral medications and eye drops. This is a more serious type of infection and may result in scarring of the eye and loss of vision. Infection can be spread from one eye to the other, or to other people, by touching the affected eye or drainage from the eye. Proper handwashing is very important. Drainage from the eye is contagious for 24 to 48 hours after beginning treatment.
https://www.ecommunity.com/health/index.aspx?pageid=P00998
4.0625
Students will learn about radioactive decay and how to write nuclear equations. This concept discusses the cause of radioactivity and the two basic types of radioactive decay. Study the basics of radioactive decay and the properties of atomic nuclei in Marie Curie's laboratory and classroom. Learn how radiocarbon dating works and how anthropologists can use this method to figure out how long ago people lived. Goes over radioactivity and explains alpha, beta, and gamma radiation. Reviews radioactivity, its causes, and its effect on the atom. A list of student-submitted discussion questions for Radiation. This is an activity for students to complete while reading the Radioactivity Concept. This study guide gives a brief overview of radiation.
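The decay these lessons describe follows a simple half-life law, N(t) = N0 · (1/2)^(t/T). As a quick illustration (an editorial sketch, not part of the CK-12 materials), the Python snippet below computes the surviving fraction of a sample and estimates a radiocarbon age, assuming the commonly quoted carbon-14 half-life of roughly 5,730 years.

```python
import math

def remaining_fraction(t_years, half_life_years):
    """Fraction of a radioactive sample left after t_years."""
    return 0.5 ** (t_years / half_life_years)

def radiocarbon_age(fraction_remaining, half_life_years=5730.0):
    """Estimate age from the fraction of carbon-14 still present."""
    return half_life_years * math.log(1.0 / fraction_remaining, 2)

if __name__ == "__main__":
    # A sample retaining 25% of its carbon-14 is about two half-lives old.
    print(remaining_fraction(11460, 5730))   # -> 0.25
    print(radiocarbon_age(0.25))             # -> about 11460 years
```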
http://www.ck12.org/physics/Radiation/
4.25
Concave mirrors are used in a number of applications. They form upright, enlarged images, and are therefore useful in makeup application or shaving. They are also used in flashlights and headlights because they project parallel beams of light, and in telescopes because they focus light to produce greatly enlarged images. The photograph above shows the grinding of the primary mirror in the Hubble space telescope. The Hubble Space Telescope is a reflecting telescope with a mirror approximately eight feet in diameter, and was deployed from the Space Shuttle Discovery on April 25, 1990. Image in a Concave Mirror Reflecting surfaces do not have to be flat. The most common curved mirrors are spherical. A spherical mirror is called a concave mirror if the center of the mirror is further from the viewer than the edges are. A spherical mirror is called a convex mirror if the center of the mirror is closer to the viewer than the edges are. To see how a concave mirror forms an image, consider an object that is very far from the mirror so that the incoming rays are essentially parallel. For an object that is infinitely far away, the incoming rays would be exactly parallel. Each ray would strike the mirror and reflect according to the law of reflection (angle of reflection is equal to the angle of incidence). As long as the section of mirror is small compared to its radius of curvature, all the reflected rays will pass through a common point, called the focal point (f). If too large a piece of the mirror is used, the rays reflected from the top and bottom edges of the mirror will not pass through the focal point and the image will be blurry. This flaw is called spherical aberration and can be avoided either by using very small pieces of the spherical mirror or by using parabolic mirrors. A line drawn to the exact center of the mirror and perpendicular to the mirror at that point is called the principal axis. The distance along the principle axis from the mirror to the focal point is called the focal length. The focal length is also exactly one-half of the radius of curvature of the spherical mirror. That is, if the spherical mirror has a radius of 8 cm, then the focal length will be 4 cm. Objects Outside the Center Point Above is a spherical mirror with the principle axis, the focal point, and the center of curvature (C) identified on the image. An object has been placed well beyond C, and we will treat this object as if it were infinitely far away. There are two rays of light leaving any point on the object that can be traced without any drawing tools or measuring devices. The first is a ray that leaves the object and strikes the mirror parallel to the principle axis that will reflect through the focal point. The second is a ray that leaves the object and strikes the mirror by passing through the focal point; this ray will reflect parallel to the principle axis. These two rays can be seen in the image below. The two reflected rays intersect after reflection at a point between C and F. Since these two rays come from the tip of the object, they will form the tip of the image. If the image is actually drawn in at the intersection of the two rays, it will be smaller and inverted, as shown below. Rays from every point on the object could be drawn so that every point could be located to draw the image. The result would be the same as shown here. 
This is true for all concave mirrors with the object outside C: the image will be between C and F, and the image will be inverted and diminished (smaller than the object). The heights of the object and the image are related to their distances from the mirror. In fact, the ratio of their heights is the same as the ratio of their distances from the mirror. If d_o is the object distance, d_i the image distance, h_o the object height and h_i the image height, then
h_i/h_o = d_i/d_o
It can also be shown that the object and image distances are related to the focal length, and from this we can derive the mirror equation,
1/f = 1/d_o + 1/d_i
In this equation, f is the focal length, d_o is the object distance, and d_i is the image distance. The magnification equation for a mirror is the image size divided by the object size, m = h_i/h_o (equal in magnitude to d_i/d_o), where m gives the magnification of the image.
Example Problem: A 1.50 cm tall object is placed 20.0 cm from a concave mirror whose radius of curvature is 30.0 cm. What is the image distance and what is the image height?
Solution: The focal length is one-half the radius of curvature, so it is 15.0 cm. Rearranging the mirror equation gives 1/d_i = 1/f - 1/d_o = 1/(15.0 cm) - 1/(20.0 cm) = 1/(60.0 cm), so d_i = 60.0 cm. The image distance is 60.0 cm. The image is 3 times as far from the mirror as the object so it will be 3 times as large, or 4.5 cm tall.
Objects Between the Center Point and the Focal Point
Regardless of where the object is, its image's size and location can be determined using the equations given earlier in this section. Nonetheless, patterns emerge in these characteristics. We already know that the image of an object outside the center point is closer to the mirror and smaller than the object. When an object is between the center point and the focal point, the image is larger and farther from the mirror. These characteristics can also be determined by drawing the rays coming off the object; this is called a ray diagram. Look again at the image above that was shown earlier in the lesson. If you consider the smaller arrow to be the object and follow the rays backward, the ray diagram makes it clear that if an object is located between the center point and the focal point, the image is inverted, larger, and at a greater distance.
Example Problem: If a 3.00 cm tall object is held 15.00 cm away from a concave mirror with a radius of 20.00 cm, describe its image.
Solution: To solve this problem, we must determine the height of the image and the distance from the mirror to the image. To find the distance, use the mirror equation: 1/d_i = 1/f - 1/d_o = 1/(10.00 cm) - 1/(15.00 cm) = 1/(30.0 cm), so d_i = 30.0 cm. Next determine the height of the image from the magnification relation: h_i = (d_i/d_o) h_o = (30.0 cm / 15.00 cm)(3.00 cm). Using this equation, we can determine that the height of the image is 6.00 cm. This image is a real image, which means that the rays of light are real rays. These are represented in the ray diagram as solid lines, while virtual rays are dotted lines.
Objects Inside the Focal Point
Consider what happens when the object for a concave mirror is placed between the focal point and the mirror. This situation is sketched below. Once again, we can trace two rays to locate the image. A ray that originates at the focal point and passes through the tip of the object will reflect parallel to the principal axis. The second ray we trace is the ray that leaves the tip of the object and strikes the mirror parallel to the principal axis. Below is the ray diagram for this situation. The rays are reflected from the mirror and as they leave the mirror, they diverge. These two rays will never come back together and so a real image is not possible. When an observer looks into the mirror, however, the eye will trace the rays backward as if they had followed a straight line. The dotted lines in the sketch show the lines the rays would have followed behind the mirror.
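The worked examples above can be checked with a few lines of code. The sketch below is an added illustration, not part of the original lesson; it uses the mirror equation as stated, together with the common sign convention that a positive image distance means a real image in front of the mirror and a negative magnification means an inverted image.

```python
def mirror_image(f_cm, d_o_cm):
    """Solve the mirror equation 1/f = 1/d_o + 1/d_i for the image distance,
    and return (d_i, magnification) using the convention m = -d_i / d_o.
    Note: an object placed exactly at the focal point gives no finite image."""
    d_i = 1.0 / (1.0 / f_cm - 1.0 / d_o_cm)
    m = -d_i / d_o_cm
    return d_i, m

# Example 1: 1.50 cm object, 20.0 cm from a mirror with R = 30.0 cm (f = 15.0 cm)
d_i, m = mirror_image(15.0, 20.0)
print(d_i, m)            # 60.0 cm, m = -3 (real, inverted, 3x larger)
print(abs(m) * 1.50)     # image height: 4.5 cm

# Example 2: 3.00 cm object, 15.00 cm from a mirror with R = 20.00 cm (f = 10.0 cm)
d_i, m = mirror_image(10.0, 15.0)
print(d_i, abs(m) * 3.00)   # 30.0 cm, image 6.0 cm tall
```

Running it reproduces the two results worked out above: an inverted 4.5 cm image at 60.0 cm, and an inverted 6.0 cm image at 30.0 cm.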
The eye will see an image behind the mirror just as if the rays of light had originated behind the mirror. The image seen will be enlarged and upright. Since the light does not actually pass through this image position, the image is virtual.
Example Problem: A 1.00 cm tall object is placed 10.0 cm from a concave mirror whose radius of curvature is 30.0 cm. Determine the image distance and image size.
Solution: Since the radius of curvature is 30.0 cm, the focal length is 15.0 cm. The object distance of 10.0 cm tells us that the object is between the focal point and the mirror. Starting from the mirror equation and plugging in the known values yields 1/d_i = 1/f - 1/d_o = 1/(15.0 cm) - 1/(10.0 cm) = -1/(30.0 cm), so d_i = -30.0 cm. The negative image distance indicates that the image is behind the mirror. We know the image is virtual because it is behind the mirror. Since the image is 3 times as far from the mirror as the object, it will be 3 times as tall. Therefore, the image height is 3.00 cm.
- A spherical mirror is concave if the center of the mirror is further from the viewer than the edges.
- For an object that is infinitely far away, incoming rays would be exactly parallel.
- As long as the section of mirror is small compared to its radius of curvature, all the reflected rays will pass through a common point, called the focal point.
- The distance along the principal axis from the mirror to the focal point is called the focal length, f. This is also exactly one-half of the radius of curvature.
- The mirror equation is 1/f = 1/d_o + 1/d_i.
- The magnification equation is m = h_i/h_o.
- For concave mirrors, when the object is outside C, the image will be between C and F and the image will be inverted and diminished (smaller than the object).
- For concave mirrors, when the object is between C and F, the image will be beyond C and will be enlarged and inverted.
- For concave mirrors, when the object is between F and the mirror, the image will be behind the mirror and will be enlarged and upright.
Follow-up questions:
- What is the name of the line that goes through the center of a concave mirror?
- What is the name of the point where the principal axis touches the mirror?
- Light rays that approach the mirror parallel to the principal axis reflect through what point?
- A concave mirror is designed so that a person 20.0 cm in front of the mirror sees an upright image magnified by a factor of two. What is the radius of curvature of this mirror?
- If you have a concave mirror whose focal length is 100.0 cm, and you want an image that is upright and 10.0 times as tall as the object, where should you place the object?
- A concave mirror has a radius of curvature of 20.0 cm. Locate the image for an object distance of 40.0 cm. Indicate whether the image is real or virtual, enlarged or diminished, and upright or inverted.
- A dentist uses a concave mirror to examine a tooth that is 1.00 cm in front of the mirror. The image of the tooth forms 10.0 cm behind the mirror.
- What is the mirror’s radius of curvature?
- What is the magnification of the image?
- When a man stands 1.52 m in front of a shaving mirror, the image produced is inverted and has an image distance of 18.0 cm. How close to the mirror must the man place his face if he wants an upright image with a magnification of 2.0?
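The three cases in the summary can also be generated programmatically. The helper below is an editorial sketch rather than part of the CK-12 lesson; it classifies the image for any object distance and can be used to spot-check answers to the numerical follow-up questions, again using the convention that a negative image distance marks a virtual image behind the mirror.

```python
def describe_concave_image(f_cm, d_o_cm):
    """Classify the image formed by a concave mirror (f > 0) for an object
    at distance d_o_cm, using 1/f = 1/d_o + 1/d_i and m = -d_i/d_o."""
    if abs(d_o_cm - f_cm) < 1e-9:
        return "object at the focal point: reflected rays are parallel, no image forms"
    d_i = 1.0 / (1.0 / f_cm - 1.0 / d_o_cm)
    m = -d_i / d_o_cm
    kind = "real" if d_i > 0 else "virtual"
    orientation = "inverted" if m < 0 else "upright"
    size = "enlarged" if abs(m) > 1 else "diminished"
    return f"d_i = {d_i:.1f} cm, m = {m:.2f} ({kind}, {orientation}, {size})"

# The three cases summarized above, for a mirror with f = 15.0 cm (C at 30.0 cm):
for d_o in (40.0, 20.0, 10.0):      # outside C, between C and F, inside F
    print(d_o, "->", describe_concave_image(15.0, d_o))
```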
http://www.ck12.org/physics/Concave-Mirrors/lesson/Images-in-a-Concave-Mirror/r14/
4.125
ping triangulation: A process developed by IBM in which client requests over the Internet can be routed to the cell that is geographically closest. When one or more mirror sites exist, ping triangulation uses a process called echo location. When a server receives a client request, it sends out an ICMP echo, or ping, packet across the Internet to the mirror sites and times the echo response. From this information, the most appropriate site to respond to the client request is determined. Basically, ping triangulation maps in multidimensional space the location of every mirror site and the end-user, sending that user not only to an open server but to the closest open server.
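A rough sketch of the idea (not IBM's actual implementation) is shown below. Sending real ICMP echo packets requires raw sockets and elevated privileges, so this illustration times a TCP handshake to each mirror instead, as a stand-in for the ping round trip; the mirror hostnames are hypothetical.

```python
import socket
import time

# Hypothetical mirror hostnames used purely for illustration.
MIRRORS = ["mirror-us.example.com", "mirror-eu.example.com", "mirror-ap.example.com"]

def round_trip_time(host, port=80, timeout=2.0):
    """Approximate the echo round trip by timing a TCP handshake to the mirror."""
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return time.monotonic() - start
    except OSError:
        return float("inf")   # unreachable mirrors are never selected

def closest_open_mirror(mirrors=MIRRORS):
    """Pick the mirror with the lowest measured round-trip time."""
    return min(mirrors, key=round_trip_time)

if __name__ == "__main__":
    print("Routing client request to:", closest_open_mirror())
```

The design choice mirrors the description above: every candidate site is probed, the responses are timed, and the request is routed to the open server with the shortest measured round trip.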
http://www.webopedia.com/TERM/P/ping_triangulation.html
4.0625
In the thick of whale season, researchers from the Hawai'i Institute of Marine Biology (HIMB) and the National Oceanic and Atmospheric Administration (NOAA) shed new light on the wintering grounds of the humpback whale. The primary breeding ground for the North Pacific was always thought to be the main Hawaiian Islands (MHI). However, a new study has shown that these grounds extend all the way throughout the Hawaiian Archipelago and into the Northwestern Hawaiian Islands (NWHI), also known as Papahānaumokuākea Marine National Monument (PMNM).
Humpback whales, an endangered species, were once on the brink of extinction due to the commercial whaling practices of the last century. Today, thanks to international protection, their numbers have dramatically increased, resulting in a greater presence of these singing mammals during the winter months. Song is produced by male humpback whales during the winter breeding season. All males on a wintering ground sing roughly the same song in any given year, but the song changes from year to year. No one is exactly sure why the whales sing, but some researchers believe it could be a display to other males. Between 8,500 and 10,000 whales migrate to Hawai'i each winter, while the rest of the population can be found in places such as Taiwan, the Philippines, the Mariana Islands, and Baja California, Mexico, among other Pacific locations (Calambokidis et al. 2008).
Over the past three decades, population recovery has resulted in a steady increase in the number of whales and a geographic expansion of their distribution in the MHI. Until recently, however, no empirical evidence existed that this expansion included the Northwestern Hawaiian Islands. This changed when scientists from HIMB and NOAA published their findings in the current issue of the journal Marine Ecology Progress Series, detailing the presence of humpback whale song in the Northwestern Hawaiian Archipelago. These researchers deployed instruments known as Ecological Acoustic Recorders (EARs) in both the NWHI and MHI to record the occurrence of humpback whale song as an indicator of winter breeding activity. Humpback whale song was found to be prevalent throughout the NWHI and demonstrated trends very similar to those observed in the MHI.
Dr. Marc Lammers, a researcher at HIMB and the lead scientist of the project, explains: "These findings are exciting because they force us to re-evaluate what we know about humpback whale migration and the importance of the NWHI to the population." The results are also of particular relevance in light of recent suggestions that an undocumented wintering area for humpback whales exists somewhere in the central North Pacific. Dr. Lammers and his colleagues believe that the NWHI could be that area.
Contact: Carlie Wiener, University of Hawaii at Manoa
http://bio-medicine.org/biology-news-1/Researchers-discover-new-wintering-grounds-for-humpback-whales-using-sound-18180-1/
4.28125
Parallelism Teacher Resources
Find Parallelism educational ideas and activities. Showing 1 - 20 of 4,205 resources.
- Grammar Bytes - Parallel Structure: The first exercise in a series of worksheets, this handout asks learners to read 10 sets of sentences and choose the one with no errors in structure. Tip: Find all of the worksheets on parallel structure throughout our website and create... (4th - 10th, English Language Arts)
- Grammar Bytes PowerPoint Presentation: Parallel Structure: When preparing students for standardized tests, this presentation about parallel structure could be a great way to review. Using concrete examples, and providing detailed explanations, students could use this as an independent review. (9th - 12th, English Language Arts)
- Persuasion and Parallel Structure: Discuss the definition of parallel structure with your high school class. In small groups, they read a section of "The Declaration of Independence" to identify examples of parallel structure. Each learner writes an essay explaining the... (9th - 12th, English Language Arts)
- Parallelism, Including Correlative Conjunctions and Comparisons: After reading the first reference page about parallel structure using correlative conjunctions, young learners rewrite nine sentences with errors in parallelism. Even the strongest writers in your language arts class could benefit from... (7th - 10th, English Language Arts)
- Parallel Structure Practice: Practice parallel structure with a multiple-choice exercise. Twenty questions challenge learners of all ages to correctly fill in blanks with phrases that are parallel in structure to what is already there. Tip: As noted, this worksheet... (5th - 12th, English Language Arts)
- Parallel Structure, Exercise 3: As the third worksheet in a series about parallel structure, this worksheet continues to challenge students' writing skills. It includes twenty multiple choice questions; students must select the correct phrase to complete each sentence... (4th - 12th, English Language Arts)
- Parallel Structure, Exercise 1: Challenge your pupils' writing skills with this two-page worksheet. There are a total of twenty sentences which must be read in order to determine whether or not they contain errors in parallel structure. Note: This worksheet accompanies... (4th - 11th, English Language Arts)
- Identifying Parallel Structure in Sentences: Examine parallelism in sentence structure. Ninth graders review Lincoln's Gettysburg Address to find examples of parallelism, and look at the Declaration of Independence for the same. They compose an original piece of writing in which... (9th - 10th, English Language Arts)
- Test Review Sheet: Collection Four and Rhetoric: Reinforce rhetorical reading with your eighth-grade honors class (or standard-level high schoolers). Using quotes from American presidents and political leaders, pupils identify the rhetorical devices highlighted in each quote... (8th, English Language Arts)
- Grammar Practice: Parallel Structure: Help your young writers improve the clarity of their sentences by showing them how to create parallel structures as they construct sentences. Two exercises give kids practice identifying the correct parallel structure and crafting... (6th - 8th, English Language Arts)
- Test Review Sheet: Collection 8 & Rhetorical Devices: Challenge your literary analysts with this test review sheet. Learners identify rhetorical devices and parallel structure in addition to defining literary devices and vocabulary. While there is no test included, this could be used as a... (8th - 9th, English Language Arts)
http://www.lessonplanet.com/lesson-plans/parallelism
4.28125
What if you knew that 25% of a number was equal to 24? How could you find that number? After completing this Concept, you'll be able to use the percent equation to solve problems like this one.
The percent equation is often used to solve problems. It goes like this: Rate × Total = Part, or R% of Total is Part.
Rate is the ratio that the percent represents (R% in the second version). Total is often called the base unit. Part is the amount we are comparing with the base unit.
Find 25% of $80. We are looking for the part. The total is $80. ‘of’ means multiply. R% is 25%, so we can use the second form of the equation: 25% of $80 is Part, or 0.25 × 80 = Part. 0.25 × 80 = 20, so the Part we are looking for is $20.
Express $90 as a percentage of $160. This time we are looking for the rate. We are given the part ($90) and the total ($160). Using the rate equation, we get Rate × 160 = 90. Dividing both sides by 160 tells us that the rate is 0.5625, or 56.25%.
$50 is 15% of what total sum? This time we are looking for the total. We are given the part ($50) and the rate (15%, or 0.15). Using the rate equation, we get 0.15 × Total = 50. Dividing both sides by 0.15, we get Total ≈ 333.33. So $50 is 15% of $333.33.
Watch this video for help with the Examples above.
$96 is 12% of what total sum? This time we are looking for the total. We are given the part ($96) and the rate (12%, or 0.12). Using the rate equation, we get 0.12 × Total = 96. Dividing both sides by 0.12, we get Total = 800. So $96 is 12% of $800.
Find the following.
- 30% of 90
- 27% of 19
- 16.7% of 199
- 11.5% of 10.01
- 0.003% of 1,217.46
- 250% of 67
- 34.5% of y
- 17.02% of y
- x% of 280
- a% of 0.332
Answers for Explore More Problems
To view the Explore More answers, open this PDF file and look for section 3.14.
Texas Instruments Resources
In the CK-12 Texas Instruments Algebra I FlexBook® resource, there are graphing calculator activities designed to supplement the objectives for some of the lessons in this chapter. See http://www.ck12.org/flexr/chapter/9613.
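For readers who want to check these examples programmatically, here is a minimal Python sketch of the percent equation; percent_equation is a hypothetical helper, not something provided by the CK-12 lesson.

```python
# Minimal sketch of the percent equation, Rate x Total = Part,
# solving for whichever of the three quantities is left unknown.
def percent_equation(part=None, rate=None, total=None):
    """Given exactly two of part, rate (as a decimal), and total, return the third."""
    missing = [name for name, value in (("part", part), ("rate", rate), ("total", total)) if value is None]
    if len(missing) != 1:
        raise ValueError("Provide exactly two of part, rate, and total")
    if part is None:
        return rate * total      # Part = Rate x Total
    if rate is None:
        return part / total      # Rate = Part / Total
    return part / rate           # Total = Part / Rate

# The worked examples from the lesson:
print(percent_equation(rate=0.25, total=80))   # 25% of $80      -> 20.0
print(percent_equation(part=90, total=160))    # $90 out of $160 -> 0.5625 (56.25%)
print(percent_equation(part=50, rate=0.15))    # $50 is 15% of   -> 333.33...
print(percent_equation(part=96, rate=0.12))    # $96 is 12% of   -> 800.0
```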
http://www.ck12.org/arithmetic/Percent-Equations/lesson/Percent-Equations---Intermediate/
4.09375
Video length: 8:04 min. Learn more about Teaching Climate Literacy and Energy Awareness. See how this Video supports the Next Generation Science Standards: Middle School, 6 Disciplinary Core Ideas; High School, 5 Disciplinary Core Ideas.
About Teaching Climate Literacy
Other materials addressing 5b
Notes From Our Reviewers
The CLEAN collection is hand-picked and rigorously reviewed for scientific accuracy and classroom effectiveness. Read what our review team had to say about this resource below or learn more about how CLEAN reviews teaching materials.
Teaching Tips
- This could be used in any lesson on climate change, oceanography, marine science, ecology, and chemistry.
- It is important that the students understand that ocean acidification is the other carbon dioxide problem.
- Ocean acidification, while not directly impacting the climate system, is the result of the oceans soaking up much of the CO2 emitted into the atmosphere, resulting in a potentially catastrophic impact on the ocean ecosystem.
About the Science
- The video discusses two indicators of global change effects on the Southern Ocean: (1) changes in Antarctic bottom water and (2) ocean acidification and its effect on pteropod shells.
- Details how tiny plankton and massive ocean currents hold clues to how rapidly the Southern Ocean is changing.
- Additionally, the video discusses how the pteropods may provide an early warning of climate "tipping points" to come.
- The video discusses the link between global climate and the potential changes in the formation of Antarctic bottom water, as well as ocean chemistry.
- Comments from expert scientist: As for the scientific strengths, it is very clear and well explained. From the field to the lab to outlining results from the study, the story is easy to understand. This is also very current science and a "hot" topic in marine science (ocean acidification in the Arctic Ocean and its impact on Limacina helicina). This is very interesting material and relevant to CLEAN.
About the Pedagogy
- The narrator asks focused questions in the video, and a transcript is provided for students to read.
- This video explicitly discusses how our understanding of the climate system is improved through observations and data collection, and how these in turn help to inform predictive models.
- A written transcript of the video is provided.
Next Generation Science Standards
See how this Video supports:
Disciplinary Core Ideas: 6
- MS-PS1.B1: Substances react chemically in characteristic ways. In a chemical process, the atoms that make up the original substances are regrouped into different molecules, and these new substances have different properties from those of the reactants.
- MS-ESS2.C1: Water continually cycles among land, ocean, and atmosphere via transpiration, evaporation, condensation and crystallization, and precipitation, as well as downhill flows on land.
- MS-ESS2.C4: Variations in density due to variations in temperature and salinity drive a global pattern of interconnected ocean currents.
- MS-ESS2.D1: Weather and climate are influenced by interactions involving sunlight, the ocean, the atmosphere, ice, landforms, and living things. These interactions vary with latitude, altitude, and local and regional geography, all of which can affect oceanic and atmospheric flow patterns.
- MS-ESS2.D3: The ocean exerts a major influence on weather and climate by absorbing energy from the sun, releasing it over time, and globally redistributing it through ocean currents.
- MS-ESS3.D1: Human activities, such as the release of greenhouse gases from burning fossil fuels, are major factors in the current rise in Earth’s mean surface temperature (global warming). Reducing the level of climate change and reducing human vulnerability to whatever climate changes do occur depend on the understanding of climate science, engineering capabilities, and other kinds of knowledge, such as understanding of human behavior, and on applying that knowledge wisely in decisions and activities.
Disciplinary Core Ideas: 5
- HS-PS1.B3: The fact that atoms are conserved, together with knowledge of the chemical properties of the elements involved, can be used to describe and predict chemical reactions.
- HS-ESS2.D1: The foundation for Earth’s global climate systems is the electromagnetic radiation from the sun, as well as its reflection, absorption, storage, and redistribution among the atmosphere, ocean, and land systems, and this energy’s re-radiation into space.
- HS-ESS2.D3: Changes in the atmosphere due to human activity have increased carbon dioxide concentrations and thus affect climate.
- HS-ESS2.E1: The many dynamic and delicate feedbacks between the biosphere and other Earth systems cause a continual co-evolution of Earth’s surface and the life that exists on it.
- HS-ESS3.D1: Though the magnitudes of human impacts are greater than they have ever been, so too are human abilities to model, predict, and manage current and future impacts.
http://cleanet.org/resources/42854.html
4
Activity 2: Counting Circle
Activity time: 8 minutes
Preparation for Activity
- Clear an area for participants to gather in a circle.
Description of Activity
Participants are challenged to pay extremely close attention to the rest of the group. In this game, they may speak only when their voice will not interrupt someone else.
Gather everyone in a circle. Explain, in these words or your own: The objective of the game is simple: With everyone in a circle, individuals call out sequential numbers. For instance, Alyssa says "one," Kyle says "two," Nara says "three," etc. However, any time a person speaks over another person, the count starts over. Do not try to plan the order in which people will speak, and do not try to divvy up roles. Instead, try to pay such close attention to everyone in the group that you sense when there is empty space into which you can speak.
Challenge the group to count to ten in this fashion. If the group accomplishes this goal quickly, try the game again, this time with all participants closing or covering their eyes.
After the game, invite reflection with questions such as:
- Did you contribute more to the success of the game by speaking, or by remaining silent?
- Was there any way to tell when it was a good time to speak? How?
- Was it easier or harder to get to ten than you anticipated? What made it easy or hard?
http://www.uua.org/re/tapestry/children/sing/session8/229983.shtml
4.03125
French Pronunciation Teacher Resources
Find French Pronunciation educational ideas and activities. Showing 1 - 20 of 179 resources.
- Languages as Reflection of Cultures and Civilizations: French Speaking Countries: Expand your class's vision of the French-speaking world by conducting this research project. Pupils focus on building 21st century skills while they look up information about a French-speaking country and put together presentations. (7th - 12th, Social Studies & History, CCSS: Designed)
- Pre-AP Strategies for French Language and Culture: Build vocabulary, fluency and confidence in your French speakers by having them participate in some of these engaging activities. Several suggestions are given, but you will have to design the actual instructional activity yourself. (9th - 12th, Languages)
- Apprenons le Francais (Let's Learn French): Bonjour! Teach your class these basic greetings and much more with a unit of materials. Included here are lessons, vocabulary practice materials and activities, conversation practice handouts, word puzzles, and more to support your class... (K - 8th, Languages)
- J'ai mal à la tête! (I have a headache!) -- Health Expressions in French: Oh, no! Everyone is getting sick! Young French speakers use French expressions regarding physical health, some of which are idioms. With the use of health expressions provided in the lesson plan, pairs work together to write stories that... (K - 4th, Languages)
- An Exploratory Approach to the Teaching of French in the Middle School: Middle schoolers review the most recent vocabulary list of French words. Using literature by Victor Hugo and Guy de Maupassant, they discover the history and culture of France. Using a map and the text, they locate the cities and... (6th - 8th, Languages)
http://www.lessonplanet.com/lesson-plans/french-pronunciation
4
A genetic examination of tarsiers indicates that the saucer-eyed primates developed three-color vision when they were still nocturnal. A new study suggests that primates’ ability to see in three colors may not have evolved as a result of daytime living, as has long been thought.
The findings, published in the journal Proceedings of the Royal Society B, are based on a genetic examination of tarsiers, the nocturnal, saucer-eyed primates that long ago branched off from monkeys, apes and humans. By analyzing the genes that encode photopigments in the eyes of modern tarsiers, the researchers concluded that the last ancestor that all tarsiers had in common had highly acute three-color vision, much like that of modern-day primates. Such vision would normally indicate a daytime lifestyle. But fossils show that the tarsier ancestor was also nocturnal, strongly suggesting that the ability to see in three colors somehow predated the shift to daytime living. The coexistence of the two normally incompatible traits suggests that primates were able to function during twilight or bright moonlight for a time before making the transition to a fully diurnal existence.
“Today there is no mammal we know of that has trichromatic vision that lives during night,” said an author of the study, Nathaniel J. Dominy, associate professor of anthropology at Dartmouth. “And if there’s a pattern that exists today, the safest thing to do is assume the same pattern existed in the past.
“We think that tarsiers may have been active under relatively bright light conditions at dark times of the day,” he added. “Very bright moonlight is bright enough for your cones to operate.”
http://www.scoop.it/t/dark-emperor-and-other-poems-of-the-night/p/4000275416/2013/04/19/for-early-primates-a-night-was-filled-with-color
4.03125
geom3d[triangle] - define a triangle
Calling Sequence:
triangle(T, [A, B, C], n)
triangle(T, [l1, l2, l3], n)
Parameters:
- T - the name of the triangle
- A, B, C - three points
- l1, l2, l3 - three lines
- n - (optional) list of three names representing the names of the x-axis, y-axis and the z-axis respectively.
Description:
A triangle is a polygon having three sides. A vertex of a triangle is a point at which two of the sides meet. A triangle T can be defined as follows: from three given points A, B, C, or from three given lines l1, l2, l3.
To access the information relating to a triangle T, use the following function calls:
- form(T) returns the form of the geometric object (i.e., triangle3d if T is a triangle).
- DefinedAs(T) returns the list of three vertices of T.
- detail(T) returns a detailed description of the triangle T.
The command with(geom3d,triangle) allows the use of the abbreviated form of this command.
Examples:
Define three points A(0,0,0), B(1,1,1), and C(1,0,2).
Define the triangle T1 that has A, B, C as its vertices.
See Also: geom3d[altitude], geom3d[area], geom3d[IsEquilateral], geom3d[IsRightTriangle], geom3d[objects], geom3d[sides]
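For readers without Maple, the following Python sketch mimics the example above: it builds a triangle from the same three points and queries its vertices and area. The Triangle3D class is purely illustrative and is not the Maple geom3d API.

```python
# Rough Python analogue of defining a 3-D triangle from three points
# and querying it, in the spirit of the geom3d[triangle] example.
import math

class Triangle3D:
    def __init__(self, a, b, c):
        self.vertices = [a, b, c]  # each vertex is an (x, y, z) tuple

    def sides(self):
        """Lengths of the three sides."""
        a, b, c = self.vertices
        return [math.dist(a, b), math.dist(b, c), math.dist(c, a)]

    def area(self):
        """Area as half the magnitude of the cross product of two edge vectors."""
        a, b, c = self.vertices
        u = [b[i] - a[i] for i in range(3)]
        v = [c[i] - a[i] for i in range(3)]
        cross = (u[1] * v[2] - u[2] * v[1],
                 u[2] * v[0] - u[0] * v[2],
                 u[0] * v[1] - u[1] * v[0])
        return 0.5 * math.sqrt(sum(x * x for x in cross))

# The three points used in the help page's example:
T1 = Triangle3D((0, 0, 0), (1, 1, 1), (1, 0, 2))
print(T1.vertices)  # [(0, 0, 0), (1, 1, 1), (1, 0, 2)]
print(T1.area())    # ~1.2247 (= sqrt(6)/2)
```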
http://www.maplesoft.com/support/help/MapleSim/view.aspx?path=geom3d/triangle